Quality Performance in the Refugee Appeal Division 2016-17

Report of Results

Prepared by:
Policy, Planning and Corporate Affairs Branch

1.0  Background

Purpose

This report describes the results of a measurement of quality in decision-making in the Refugee Appeal Division (RAD) and, equally, of the effectiveness of the methodology piloted here for the first time. The Immigration and Refugee Board of Canada (IRB) has evaluated quality through an independent, objective process since 2011. This process has evaluated the other divisions of the IRB on an annual basis and employs former members of the respective divisions to conduct the field evaluations. The RAD began operations in December 2012 and, as a new division, has not yet given rise to a pool of former members with the necessary expertise to assess RAD quality. To address the resulting gap in the qualitative performance information needed by the Chairperson and managers of the RAD, the IRB undertook this inaugural study, employing an independent outside expert to assess a sample of RAD files.

Overall Approach

As with the quality studies of the other divisions, this study examines the key indicators of quality that align with the IRB’s overall expected results for decision-making excellence:

  1. Timely and complete pre-proceeding readiness 
  2. Respectful proceedings
  3. Focused proceedings
  4. Clear, complete, concise and timely decisions

The approach of this study is shaped by the RAD’s unique adjudicative process: the vast majority of appeals are decided as paper proceedings. Whereas the other divisions are assessed by listening to the recording of the hearing and reviewing the documentation in the record, quality manifests differently, and in different places, in a paper appeal. The standard IRB quality checklist therefore underwent a top-down revision to align it with the loci of quality in the RAD. As always, the correctness of the decision-maker’s findings falls outside the scope of the study.

Checklist Design

The standard IRB quality checklist consists of over 30 questions examining how well the member and registry prepare for the oral hearing, the respectful conduct of the hearing, its efficiency, whether the reasons logically link to the evidence adduced, and the timeliness of the decision. Without the oral hearing as the focal point of quality, the checklist for the RAD was designed with reduced attention to proceedings quality and more focus on the completeness and transparency of reasons. A second checklist was designed to assess oral appeals; it does not differ significantly from the standard checklist. In addition, the consultant evaluator developed and tested seven supplementary questions to produce a potentially fuller analysis of results. Each checklist question is assessed along one of two rating scales: a dichotomous yes/no scale or a 1-to-3 ordinal scale. Appendices C and D set out the checklist questions and rating scales.
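
For the ordinal questions, results in Section 2 are reported as an average score out of 3 and the percentage of cases scoring at least 2.0. The following minimal sketch (Python, with invented ratings purely for illustration) shows how those two reported figures relate to the underlying ratings:

    from statistics import mean

    # Hypothetical 1-to-3 ordinal ratings for one checklist question
    # across a set of sampled appeals (2.0 is the IRB's acceptable standard).
    ratings = [3, 2, 1, 3, 2, 2, 3, 1]

    average = mean(ratings)                                  # "Average score out of 3"
    share_met = sum(r >= 2 for r in ratings) / len(ratings)  # "Percentage of cases scoring at least 2.0"

    print(f"{average:.1f}", f"{share_met:.0%}")              # 2.1 75%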

Sample Design and Selection

The study examined 61 of the 257 appeals decided on the merits during the months of March and April 2016. The files were randomly selected in proportion to region, claimant versus Ministerial appeals, paper versus oral appeals, and language of appeal. Of the 24 members who decided at least one appeal, 21 appear in the sample. Results are considered accurate to within 9 percent, 9 times out of 10. However, the goal of this study was not the generation of statistics, but to identify areas of strength and concern, discern patterns, establish a baseline for future studies, and test the methodology itself.
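
The stated precision can be sanity-checked with the standard margin-of-error formula for a simple random sample drawn from a finite population. The sketch below is an illustration under those assumptions (worst-case proportion p = 0.5, 90% confidence), not the study’s own calculation; it reproduces the roughly 9-percent figure:

    import math

    def margin_of_error(n, N, z=1.645, p=0.5):
        """Worst-case margin of error for a simple random sample of n
        drawn from a finite population of N, at the confidence level
        implied by z (1.645 for 90%, i.e. 9 times out of 10)."""
        se = math.sqrt(p * (1 - p) / n)       # standard error of a proportion
        fpc = math.sqrt((N - n) / (N - 1))    # finite population correction
        return z * se * fpc

    print(f"{margin_of_error(61, 257):.1%}")  # 9.2% -- about 9 percent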

The following charts illustrate the sampling makeup:

Members

  • Western: 3
  • Central: 13
  • Eastern: 5
 

Regional Office

  • Western: 5%
  • Central: 67%
  • Eastern: 28%
 

Paper/Oral

  • Paper: 97%
  • Oral: 3%
 

Appellant

  • Claimant: 93%
  • Minister: 7%
 

Language of Appeal

  • English: 75%
  • French: 25%
 

Research Administration

The absence of an experienced former member prompted the Policy, Planning and Corporate Affairs Branch (PPCAB) to seek the expertise of an outside consultant. The consultant then participated in the RAD New Member Training of June 2016 to acquire the same background as other new members in the Division’s legal framework, the weighing of evidence, reasons writing and related matters. A full biography of the consultant is set out in Appendix B. The consultant’s companion report is attached as Appendix A; an excerpt follows below to further describe the research administration:

  • The files that were provided for the evaluation constituted the complete record of each proceeding before the RPD as well as before the RAD. The files ranged from about 2 inches in thickness to as many as 8 or 9 inches. In the interest of economy, and based on the Board’s experience with other decision-evaluation exercises, it was determined that the evaluator would not review the entire file for a case.
  • Instead, for the first 25 paper process matters the evaluator reviewed in detail the Basis of Claim form, the RPD decision, the Appellant’s Memorandum and the RAD decision. In those rare cases where the Minister participated, the Minister’s Memorandum and the reply Memorandum, if submitted, were also reviewed. This meant that the hearing before the RPD was not listened to, and that the administrative paperwork and the documentation on the RPD and RAD files was generally not considered in the evaluation.
  • In the interests of further economy, and because it was not providing important information for the evaluation beyond what was set out in the other materials, the Basis of Claim form was not reviewed after the first 25 evaluations had been completed.
  • The review process for the four appeal decisions where the RAD had held a hearing differed in three ways. The RAD hearing was listened to in its entirety; the RAD’s administrative documentation leading up to the hearing was reviewed; and a more extensive set of evaluation criteria was applied.

Limitations

The findings of this report are solely those of the evaluation team. Their observations are necessarily subjective in nature and do not lend themselves to firm conclusions on legal matters such as the correct application of the law, the weighing of the evidence, or the fairness of the proceedings from a natural justice perspective. Only a court reviewing the case can arrive at such conclusions. This report aims to provide a perspective to improve the Division’s performance overall.

2.0  Performance Results

2.1  Timely and Complete Pre-proceeding Readiness (oral appeals only)

Why measure this:

The groundwork for quality is set pre-hearing when the registry prepares a timely, organized and complete case docket and the member assimilates the facts and key issues of the case.

What was measured
  Question | Average score out of 3 (target 2.0) | Percentage of cases scoring at least 2.0
  #1  The member ordered a hearing within 10 working days (Footnote 1) of being assigned to the file or after receiving new evidence. | 1.5 | 25%
  #2  The member prepared the list of relevant issues. | 2.5 | 75%
  #3  The registry notified all parties within 10 working days (Footnote 1) of reception of the list of issues. | 3.0 | 100%
  #4  The registry notified the UNHCR within 10 working days (Footnote 1) of reception of the list of issues. | 3.0 | 100%
  #5  The registry arranged all the requirements for the hearing to proceed. | 2.0 | 100%
  #6  The file contains all required information and documents. | 2.5 | 75%
  #7  The file was organized in a logical and standardized manner as established by the division. | 3.0 | 100%

What the numbers say:

  • The registry showed strong results for sending the notification to the parties, arranging the oral hearing, organizing a complete case docket, and notifying the UNHCR
  • 4 oral hearings were evaluated out of 9 that were conducted—all within the Central Region; this subsample is not sufficient to make generalizations nationally

Strengths

  • Preparing for hearing. The study found that the registry properly carried out pre-hearing arrangements to ensure that hearings would proceed without issue, including arranging the room and the correct-language interpreter. An examination of the case dockets found that they were properly organized and complete with all the required documentation.  

Opportunities

  • United Nations High Commissioner for Refugees (UNHCR). The registry sent a notification to the UNHCR of all oral hearings. However, a copy of the notification in each case was not placed on the file or entered in NOVA, as is expected for all outgoing correspondence.
  • Ordering a hearing. To contribute to overall timely decision-making, members are expected to order a hearing within 10 working days of being assigned to the file or after receiving new evidence. This was found not to be the case in 3 of the 4 oral appeals examined. In these 3 oral appeals, the members took an average of 28 working days. Four cases cannot be considered a meaningful sample, but this result may suggest the need for the Division to order an ad hoc NOVA report of a larger sample to determine if concern is warranted.
  • List of issues. In 3 of the 4 oral appeals, the member identified the issues to be addressed, an indication that the member had assimilated the contents of the file and was prepared for the hearing. However, the study found that in one of the hearings, the member stated only that the issue was a particular affidavit, rather than specifying which issue raised by the affidavit needed to be canvassed. In another hearing, the member considered, and made, an alternative finding on a determinative issue that had not been raised as an issue in the notice of hearing.

2.2  Respectful Proceedings (paper and oral appeals)

Why measure this:

Individuals appearing before the IRB expect that they will be treated with sensitivity and respect. Any shortcoming in this regard potentially undermines tribunal integrity and public confidence.

What was measured
  Question | Average score out of 3 (target 2.0) | Percentage of cases scoring at least 2.0
  #8  The member treats participants with sensitivity and respect. (Oral appeals only) | 2.5 | 100%
  #9  The member ensures parties have an opportunity to present and respond to evidence and to make representations. (Paper and oral appeals) | 2.1 | 69%
  #10  The member identifies when the evidence has not adequately addressed an important issue and asks questions of clarification. (Oral appeals only) | 2.0 | 100%
  #11  Communications in the absence of a party are disclosed and summarized on the record. (Paper and oral appeals) | 2.7 | 88%
  #12  Problems with interpretation are identified and addressed. (Oral appeals only) | n/a | n/a

What the numbers say:

  • Scores generally met or exceeded the target
  • In #12, no hearings were found with interpretation problems

Strengths

  • Sensitivity and respect. Based on the recordings of the 4 oral appeals examined, the study found that the members displayed courtesy and sensitivity with minor deviations that did not alter the fundamental respectfulness of the proceeding. The study found instances that may be considered for future prevention, namely, neglecting to swear in the appellant and framing questions not in the interrogative but as statements, which appears to have confused the appellant.

Opportunities

  • Responding to adverse evidence. A respectful proceeding where the appeal is heard in writing is, for the purposes of this study, assessed by checklist questions #9 and #11: the parties have the opportunity to present and respond to evidence and to be informed of any ex parte communication. The study found 7 paper appeals where the member made adverse findings on an issue that was not originally raised in the appellant’s memorandum or the RPD decision. In only 1 of the 7 appeals did the member write to the appellant to invite supplementary submissions on the new issue and consider those submissions in the reasons for decision. In another appeal, the member invited and received submissions but did not address the substance of those submissions, which dealt with a Federal Court decision that was contrary to the RAD’s decision on the issue. In the remaining 5 appeals, the member made adverse findings on a new issue without having first sought submissions from the appellant. It is noted, however, that in 2 of those 5 appeals, the findings made were not on a determinative issue.

Recommendation

A communiqué could remind members of the importance of inviting parties to provide submissions on a new issue being considered.

2.3  Focused Proceedings (oral appeals only)

Why measure this:

Proceedings that are efficient and well managed create conditions for quality outcomes to emerge and support the IRB’s efforts to make the most effective use of its resources.

What was measured
  Question | Average score out of 3 (target 2.0) | Percentage of cases scoring at least 2.0
  #13  The member begins the hearing by clarifying its purpose and confirming the focus on the list of issues. | 2.0 | 100%
  #14  The member ensures the parties focus testimony and documentation on relevant issues. | 2.0 | 100%
  #15  The hearing is completed in the time allotted as indicated in the Notice to Appear. | n/a | n/a
  #16  The member's questioning is relevant to the issues set out in the hearing agenda (see #13). | 1.7 | 67%
  #17  The member's questioning is focused and organized. | 2.3 | 75%
  #18  The member manages challenging situations as they arise. | 2.0 | 100%
  #19  During the course of the hearing, the member appropriately identifies opportunities to narrow the issues with the consent of the parties. | n/a | n/a
  #20  If appropriate, the member further focuses and narrows the issues for final representations. | 3.0 | 100%
  #21  The member accommodates the needs of vulnerable participants, including unaccompanied minors, to facilitate their presentation of evidence. | n/a | n/a
  #22  The member deals with oral applications made by parties. | n/a | n/a
  #23  The member identifies applicable legislation, regulations, rules or guidelines. | n/a | n/a

What the numbers say:

  • Members scored highest for establishing and maintaining focus on the issues and managing challenging hearing-room situations
  • No results were found for #15, as the Notice to Appear does not contain an expected time allocation for the hearing
  • #19, #21, #22 and #23 were not applicable to the appeals examined.

Strengths

  • Issue identification. Members started all 4 hearings by stating the issues to be addressed. This action is scored a “2” for meeting the IRB’s acceptable standard. However, the study found an opportunity for members to go beyond this baseline by providing a more informative context that sets the stage for the hearing. For example, in one hearing, the member cited an affidavit as the general issue and then proceeded with the hearing. A more effective introduction would have specified the particular issue raised by the affidavit.
  • Narrowing the issues. An effective means to focus a hearing is for the member to direct the parties to narrow their examination or submissions once key issues have been addressed to the member’s satisfaction. In the only example found, the member was careful to ask counsel to make submissions only on the matters raised at the hearing and made it clear that the memorandum would be considered on the other matters. In the oral appeals examined, however, members rarely encountered an opportunity to narrow the issues, as the hearings were already limited to specific issues.
  • Members’ questioning. Members generally asked questions that were relevant to the issues list. In one appeal, however, the member asked questions on a matter neither related to the issues list nor obviously flowing from it. The matter appeared relevant to the appeal itself but not to the specified purpose of the hearing. Most questioning was focused and organized. Examples that scored a “3” for exceeding IRB expectations were described by the evaluator as showing “[t]horough preparation and a plan for the questioning and for being concise, clear and to the point.” In the one case that scored a “1” for needing improvement, the member did “not ask many questions and does not give the impression of a lot of preparation,” yet ultimately denied the appeal and made an adverse finding without questioning on the issue.

Opportunities

None.

2.4  Clear, Complete, Concise and Timely Decisions

2.4.1  Reasons are Complete (oral and paper appeals)

Why measure this:

The Supreme Court of Canada set the requirement for justifiability, intelligibility and transparency in the decisions of administrative tribunals (Footnote 2). Through questions #24 to #34, this study applies the Court’s requirement in the context of IRB decision-making.

What was measured
  Question | Average score out of 3 (target 2.0) | Percentage of cases scoring at least 2.0
  #24  The member summarizes the main issues. | 1.4 | 33%
  #25  The member addresses the positions of all parties. | 2.3 | 92%
  #26  The member makes clear, unambiguous findings of fact. | 2.4 | 95%
  #27  The member supports findings of fact with clear examples of evidence shown to be probative of these findings. | 2.3 | 87%
  #28  The member addresses parties’ evidence that runs contrary to the member’s decision, and why certain evidence was preferred. | 2.2 | 88%
  #29  The member identifies legislation, regulations, rules, Jurisprudential Guides, Chairperson’s Guidelines or persuasive decisions where appropriate. | 2.3 | 66%
  #30  The member takes into account social and cultural contextual factors in assessing evidence. | 1.9 | 50%

What the numbers say:

  • #25 and #26 are the strongest elements of the reasons—over 90% of appeals scored either a “2” or “3”
  • #24 is the weakest element of the reasons—33% of appeals scored either a “2” or “3”

Strengths
  • Addressing the parties’ positions. In assessing #25 and #28, the study found that, in the vast majority of decisions, members took care to address the positions of the parties in a detailed and thoughtful manner, particularly when the members preferred contrary evidence. About 90% of reasons were rated either a “2” or “3”. Typical observations of the evaluator are:
    • Addresses all points carefully.
    • The appellant’s arguments are considered on each issue.
    • Cites country documents that might be said to counter the RAD decision on state protection and shows why they are not determinative in this case.

    In contrast, 5 reasons were scored a “1”, and although these constituted a small minority, they stand to have lingering impact because the member did not acknowledge or assess the party’s evidence on a determinative issue. Typical observations of the evaluator are:

    • Does not assess the arguments put forward by the appellant on the determinative issues.
    • Having called for submissions on an issue, the RAD does not refer to the point advanced in them.
  • Clear, unambiguous findings. Reasons were found to have provided findings of fact in a clear and unambiguous manner. Against #26, 95% of reasons were scored a “2” or “3” and only 5% were scored a “1”. The evaluator notes that:
    • Findings are explained and justified; there is no ambiguity.
    • Is very clear throughout the decision, which is primarily about the new evidence.
  • References. Referencing applicable legislation, case law or an IRB policy instrument, such as a Guideline, informs the parties of the legal and policy authorities used by the member in reaching the decision. Though the subsample is small, all 4 oral appeals met expectations for consistent and concise citation of legislation and case law. Among paper appeals, results were divided between not citing any references when the occasion was appropriate for doing so, such as when the appellant or the RPD relied on case law, and citing references extensively throughout the reasons.
  • New evidence. In 19 of 22 appeals, submissions were received regarding new evidence, and the member applied the legal test for its admission. In 15 of the 19 cases, the members were clear and precise in explaining the analysis and the basis for the decision. No decision was made on new evidence in 3 of the 22 cases.

Opportunities
  • References. In other instances, the evaluator noted that members referred to the law at length without explaining the relevance of doing so to the appeal:
    • Some Members showed a tendency to write at length about the legal standards that define the role of the RAD or the legal standards for new evidence or oral hearings, or the law establishing legal standards for substantive matters such as credibility assessments or state protection. While the justification for this practice is outside the scope of this evaluation, it is of note that these analyses sometimes dealt with aspects of those standards that seemed to have no relevance to the decision in question. For example, a decision might discuss in some detail the appellate role of the RAD on credibility issues even though no such issues arose in the appeal, or discuss the legal standards for an aspect of the new evidence test that similarly was not relevant to the specific case.
    • These practices raise two issues. First, where the analysis addresses matters not relevant to the appeal it can give the parties the impression that the decision is a generic one rather than one focused on their particular circumstances and concerns. And, second, even where all issues dealt with in a legal analysis can be said to pertain to the appeal, writing at length on these matters requires the reader to work through a quite-complicated and lengthy legal analysis before getting to the decision on the merits. In most cases this will not contribute to the Appellant’s ability to understand the decision.
  • Summary of main issues. The study examined whether reasons started with an appropriate overview of the issues. This is considered important as it forms the basis to appreciate the grounds for decision to follow. A large majority (n=41) of the 61 paper and oral appeals reviewed did not have an adequate summary or any summary of the main issues. These decisions were scored a “1” against question #24 and typically dealt with the main issues in one of the following ways:
    • No summary or even a list of issues—the analysis starts with one of the issues.
    • A statement in the form of citing what the RPD had found.
    • A cursory statement of the appellant’s position.
    • A long synopsis of the grounds for appeal and the RPD decision.
    • The issue on which the appeal is decided is neither summarized nor identified until it is disposed of at the end of the decision.

    In contrast, 4 appeals (7%) scored a “3”. These reasons were noted for going beyond stating the issues to providing an informative summary of what the appeal is about.

    Recommendation

    A communiqué to members could be helpful in highlighting the importance of: (1) providing an appropriate summary of the main issues near the beginning of the reasons; (2) when referencing the law, clearly establishing its relevance to the matter at hand.

  • Social and cultural factors. In 10 appeals, members were presented with appellant arguments raising social and cultural factors in the assessment of the evidence. Members scored a “3” for addressing those arguments in the reasons or a “1” for not dealing with them. Examples of the contextual factors included the person’s education, relative sophistication and social isolation.

2.4.2  Reasons are Transparent and Intelligible (oral and paper appeals)

What was measured
  Question | Average score out of 3 (target 2.0) | Percentage of cases scoring at least 2.0
  #31  The member uses plain language. | 1.8 | 74%
  #32  The member gives appropriately clear and concise reasons. | 2.1 | 84%
  #33  The reasons are easily understood and logically sequenced. | 1.8 | 65%
  #34  The reasons are duly economical, taking into account the complexity of the appeal and the volume of evidence. | 2.0 | 72%

What the numbers say:

  • The 61 reasons can be grouped into 1 of 3 tiers for transparency and intelligibility. Approximately:
    • 30% scored all “3s” or a mix of “2s” and “3s”
    • 40% scored all or mostly “2”
    • 30% scored all “1s” or a mix of “1s” and “2s”
  • Little consistency was found—individual members did not cluster in 1 of the 3 tiers; rather, they tended to scatter among two or more tiers.

Strengths
  • Plain language. Plain language is writing designed to enable the parties to understand the reasons quickly and easily. 74% of the reasons scored either a “2” or “3” for using relatively plain language and short sentences, though some contained instances of long paragraphs, long sentences or awkward phrasing.
  • Clear and concise reasons. Over 9 in 10 reasons were clear and concise and half of these scored a “3”. These reasons were described as direct and to the point and consistently clear.

Opportunities
  • Understandability and sequencing. In assessing #33, the study rated 1 in 3 reasons a “1” for not being easily understood or logically sequenced. These reasons were found to lack a coherent, sequential structure. Analyses frequently moved through disparate issues, making the decision complicated to follow and to understand. These reasons stretched over several pages but uniformly lacked headings or signposts to signal to the reader that the analysis was transitioning to another issue. Typical observations from the evaluator are:
    • The analysis proceeds issue by issue but does not use any headings in a multi-page, multi-issue analysis.
    • Under a single heading the decision works through the RPD decision, the [appellant’s] grounds and then four different aspects of the decision without clear signposts. While the decision is clear the structure does not assist in this regard.
    • The reasons are very hard to follow since they recite arguments and for the most part are unclear as to what is accepted. The dispositive findings, that the RPD reasons are inadequate, are stated as conclusions only.

    Reasons that were scored a “1” against #33 tended to score “1” in at least one other element, such as not using plain language, not being clear and concise, or not being economical, suggesting that when structure and sequencing falter, intelligibility as a whole may suffer. However, 65% of reasons did meet or exceed the IRB standard. This included 9 reasons that scored a “3” for dealing with multiple, complex issues and organizing them with headings or transitions that guided the reader along the member’s logic path.

  • Reasons structure. Over 60% of reasons were structured around the issues, with each issue considered in a discrete segment with minimal, if any, intermingling of issues, arguments and findings. This method was found to be particularly effective when headings and sub-headings were employed to guide the reader. The other structures used were event-based, narrative or without apparent structure (e.g. a recitation of arguments); these lacked the clarity and coherence of an issue-by-issue structure.
  • Economical reasons. 72% of the reasons were considered economical in relation to the complexity and volume of evidence of the appeal. These reasons were largely concise and careful not to dwell on extraneous matters. An opportunity to improve economy overall was identified from the 28% of reasons that tended to:
    • Devote several pages to outlining arguments, the role of the RAD or the RPD’s decision that were not germane to the basis for the decision.
    • Provide an extensive, even academic, discussion on case law (e.g. Raza) rather than a synopsis or explanation of how it applies to the decision.
    • Summarize the RPD decision and arguments only to repeat them later in the analysis section of the reasons.

Recommendation

The Division may wish to consider the above findings on reasons structure and economy when considering professional development programming.

2.4.3  Supplementary Questions

The evaluator developed and tested 7 supplementary questions on reasons completeness, transparency and intelligibility that delve deeper into aspects of quality not directly addressed by one of the checklist questions. Reasons were assessed against these supplementary questions along the same numerical rating scales. Results are provided in the table below.

Supplementary questions
  Question | Average score out of 3 (target 2.0) | Percentage of cases scoring at least 2.0
  S1  The reasons apply the appropriate tests for the admission of new evidence. | 2.7 | 85%
  S2  The decision considers all relevant issues from the record as appropriate. | 2.6 | 100%
  S3  The reasons are structured around the issues (i.e. they are issues-driven). | 2.2 | 62%
  S4  The reasons show a clear logic path to the result. | 2.0 | 49%
  S5  The reasons are likely to explain the result to the subject of the appeal. | 2.8 | 89%
  S6  The reasons appear to provide useful guidance to the RPD and other readers (e.g. on CanLII). | 1.9 | 67%
  S7  The member conducted an independent assessment of the claim rather than a review of errors made by the RPD. | 2.8 | 92%

Findings
  • In question S1, reasons scored a high average of 2.7 out of 3 for applying and explaining the appropriate test for the admission of new evidence.
  • In question S2, members similarly scored a high 2.6 for considering relevant issues from the record even when not raised by the appellant. This goes to the thoroughness of the member’s review of the file rather than to procedural fairness, which is addressed by checklist question #9.
  • Questions S3 and S4 examine reasons structure. The average scores met the target, but this hides the significant percentage of reasons that scored below 2.0. Reasons structure is discussed in detail in Subsection 2.4.2 above.
  • Under question S5, about 9 in 10 reasons did more than simply support the result, explaining to the appellant how and why they lost or won their appeal. However, under S6, reasons were found to be less effective as guidance to the RPD, should the matter be returned, or to other readers with similar appeal issues, such as on CanLII.
  • Finally, under S7, the study found that in more than 9 in 10 appeals, the member carried out an independent assessment of the claim rather than a review for errors, as demonstrated by an independent analysis of credibility; references to the recording, submissions or additional documents from the record; or the avoidance of terms like “the RPD reasonably found” or “it was open to the RPD to find”.

These supplementary questions do not form part of the checklist proper or the averaging of scores. They are intended to complement the study by filling in analytical gaps. The efficacy of these questions will be considered for the next annual measurement of quality.

2.4.4  Decisions are Rendered as Soon as Practicable

Why measure this:

A timely decision contributes to resolving uncertainties among the parties and to meeting the IRB’s mission.

What was measured
  Measure | National Average (Days) | Percentage of cases within 90 days
  #35  Among all 249 paper appeals decided during March-April 2016, the number of days from assignment of the file to the decision, and the number of members. | 30 | n/a
  #36  Among all 249 paper appeals decided during March-April 2016, the number of days from perfection of the appeal to assignment of the file, and the number of members. | 86 | 63%

What the numbers say:

Average Days to Assign and Decide an Appeal
  Region | Perfection to Assigned (days) | Assigned to Reasons (days) | Total (days) | Number of Members
  National | 86 | 30 | 116 | 24
  Western | 157 | 23 | 180 | 3
  Central | 43 | 27 | 70 | 14
  Eastern | 171 | 38 | 209 | 7
  • The national average time to decide an appeal is 116 days from perfection of the appeal
  • The fastest time was in the Central Region at 70 days and the slowest in the Eastern Region at 209 days
  • Where the size of the member complement is small (Western, Eastern), the time to assign the file is longer; where complement size is larger (Central, National), the time to assign is commensurately faster
Observations
  • Language of appeal. Significant variability was found in the number of days taken to assign the file to a member. Assigning the file takes an average of 86 days nationally, and reaches 171 days in the Eastern Region. Further analysis found a language correlation: French appeals (which were heard exclusively in the Eastern Region) took an average of 197 days to be assigned to a member, compared to 85 days for English appeals in that region, meaning that a French-language appellant could expect to wait 2.3 times as long (132% longer) for the file to be assigned. However, once the French-language file was assigned to a member, the decision arrived within a relatively standard 32 days. This correlation does not imply causation. Other factors outside the scope of this study (such as the number of governor-in-council appointments) may account for delays in assigning the file.
  • Once a member is assigned the file, the matter is decided in an average timeframe of 30 days, about 1 week faster in the Western Region and 1 week longer in the Eastern Region.
  • When the Minister is a party, members take an average of 32 days to decide the paper appeal, virtually the same time as the 30 days without the Minister.
  • The oral appeals decided during this period took 29 days to be assigned and then a further 160 days to be decided; oral appeals are not subject to a regulatory time limit for decision.

3.0  Summary

In this inaugural study of quality, the RAD achieved a global average score of 2.1 out of 3, meeting the IRB’s acceptable standard of quality in decision-making. Broken down, the study found wide variability in practices and performance. Obvious strengths included hearings that were fundamentally respectful, appellants who had the opportunity to present and respond to evidence, reasons that were clear and concise and whose findings were unambiguous, and explanations of why certain evidence was preferred over other evidence.

In terms of opportunities, apart from the likely need for more French-speaking members, the study found no critical concerns likely to require intensive remedy. Reasons could benefit from: a more informative summary, the use of section headings or other transitions, and the pruning of legal references to what is relevant to the decision.

Not to be lost is the registry’s role in engendering quality outcomes. The registry arranged hearings that proceeded without issue and delivered all of its case dockets in a complete and organized manner, with the exception of not having retained copies of correspondence to the UNHCR.

Finally, the methodology for testing quality was itself the subject of examination. This study served as a test run of the checklist questions and of the method of reviewing paper appeals by examining select documents from the record. In conducting a study of diverse appeal types and member approaches, the evaluation team made a conscious effort to standardize the interpretation of the questions and the rating scales. Some questions were found to be less relevant; for others, the rating scale was switched from the ordinal 1-2-3 scale to a yes/no scale. To help round out the analysis, the consultant developed seven supplementary questions plus extensive commentary on the current checklist to improve its future usability for the RAD and, potentially, other divisions. Further analysis of these comments and another cycle of quality measurement can be expected before settling on an optimum methodology.

Appendix A – Evaluator’s Companion Report

Report on the 2016 Decision Quality Evaluation: Refugee Appeal Division, Immigration and Refugee Board of Canada

Prepared by: Doug Ewart
September, 2016

Introduction

This Report provides an overview of the 2016 Decision Quality Evaluation of the Refugee Appeal Division of the Immigration and Refugee Board of Canada (the RAD). Part I outlines the evaluation process and offers some context for and general commentary on the evaluations of individual decisions undertaken for this project. Part II sets out the interpretative guide that was developed for this evaluation. It then goes on to provide a relatively detailed assessment of the evaluation instrument that was used and to put forward a number of specific suggestions for the Board’s consideration for future evaluations.

Part I

The Project

The purpose of this project was to provide an independent and objective assessment of a sample of RAD decisions using a set of evaluative criteria developed by the Board.

Although other Divisions of the Board have been assessed in this fashion for a number of years, this was the first assessment of the more-recently created RAD. It was also the first to apply the Board’s evaluation criteria to an appellate process that is overwhelmingly based on a review of a file rather than on a hearing. As a result, the evaluator was asked to submit comments along the way about the applicability of the criteria, developed in oral hearing contexts, to this exercise.

It is to be stressed that the evaluation was not designed to, and at no time did, consider whether a RAD decision seemed correct or reasonable. The evaluation criteria focus only on how an appeal was conducted and how the decision was written. Ratings and comments were provided without regard to any views the evaluator may have had on the result in a given case.

Sixty-one RAD decisions were evaluated over the course of the project. Four arose from oral hearings and the balance from exclusively paper proceedings. The geographic and linguistic distributions of the paper process decisions that were evaluated closely tracked the distribution of the RAD’s caseload among the Vancouver, Toronto and Montreal offices, and between English and French proceedings. All four of the oral hearing matters arose from the Toronto office and were English language proceedings.

It is of note that towards the end of the time period from which the evaluated decisions arose, the Federal Court of Appeal released two very significant decisions affecting the role of the RAD. One provided important clarifications of the scope of an appeal at the RAD, and the other clarified the approach to be taken to new evidence put forward in an appeal.

The evaluations took into account the fact that the legal expectations of Board Members accordingly changed during the review period.

The Process

This evaluation proceeded in three phases, on a part-time basis, from early June through to September 2016: 1) attendance at the training program for new RAD Members; 2) the evaluation of 61 RAD decisions, including the development, part way through the process, of an interpretative guide to the existing evaluation criteria and the development of additional criteria for the evaluations; and 3) the preparation of this final report.

The evaluations of the paper process decisions used a standard set of 13 criteria (20 after the additional criteria were added), each with either a three-level rating scale or a Yes-No assessment, depending on the nature of the criterion in question. As well, narrative comments could be, and frequently were, added to the numeric assessment of individual criteria. These comments were used to explain the rating or the relationship between or among the various ratings for different elements of a specific decision, or to note anomalies or issues of particular interest arising from the decision in question.

Some of the criteria provided for the evaluation proved challenging. Accordingly, after the first 9 evaluations had been completed, an interpretative guide to the criteria was developed to address those challenges. It was used from then on, and the initial evaluations were adjusted as necessary in light of the new interpretations. As well, seven new evaluation criteria were added to the assessment process on a pilot project basis and were applied to all of the decisions evaluated.

Draft evaluations were submitted at intervals to provide the opportunity for feedback on the approach adopted, including the nature and level of detail in narrative comments.

Scope of Review

The files that were provided for the evaluation constituted the complete record of each proceeding before the RPD as well as before the RAD. The files ranged from about 2 inches in thickness to as many as 8 or 9 inches. In the interest of economy, and based on the Board’s experience with other decision-evaluation exercises, it was determined that the evaluator would not review the entire file for a case.

Instead, for the first 25 paper process matters the evaluator reviewed in detail the Basis of Claim form, the RPD decision, the Appellant’s Memorandum and the RAD decision. In those rare cases where the Minister participated, the Minister’s Memorandum and the reply Memorandum, if submitted, were also reviewed. This meant that the hearing before the RPD was not listened to, and that the administrative paperwork and the documentation on the RPD and RAD files was generally not considered in the evaluation.

In the interests of further economy, and because it was not providing important information for the evaluation beyond what was set out in the other materials, the Basis of Claim form was not reviewed after the first 25 evaluations had been completed.

The review process for the four appeal decisions where the RAD had held a hearing differed in three ways. The RAD hearing was listened to in its entirety; the RAD’s administrative documentation leading up to the hearing was reviewed; and a more extensive set of evaluation criteria was applied.

Observations on the Process
1) Assistance of the Policy, Planning and Corporate Affairs Branch

Throughout the process the evaluator benefited significantly from the advice and support of the Policy, Planning and Corporate Affairs Branch of the Board. First, and most obviously, they developed the approach and provided the criteria that were so important to the success of the exercise. As well, they arranged for participation in the RAD training that turned out to be central to a full appreciation of the context behind many of the assessment criteria.

Officials in the Branch also provided a very helpful orientation to the checklist and the evaluation process overall, and were always available to assist with any questions as the evaluations proceeded. As noted above, they were also open to the creation and use of an interpretative guide to the assessment criteria to reflect the specialized features of the RAD appellate process and supported the addition of seven additional criteria on a pilot project basis.

2) Value of the Criteria

While the initial criteria required some interpretation to take into account the unique aspects of the RAD process, and to add some clarity where criteria appeared to overlap, they proved to be essential to the evaluation process. They focused and disciplined the assessment of each decision.

Indeed, the initial impression created by a review of a file and a RAD decision not infrequently changed when specific criteria were applied to the decision. This very much parallels what adjudicators and judges have often stated: the process of writing a decision can change the result the decision maker had in mind after hearing the matter but before starting to write the decision. The use of specific criteria and a rating scale supported a reasoned assessment of each decision and accordingly played an important role in increasing the rigour of the assessment process.

As well, the ability to add written comments to the numeric assessments proved to be extremely valuable. The discipline of adding comments to explain or provide context for a numerical rating frequently had the effect of altering the rating. Thus, in addition to what value the comments may have when considered on their own, they clearly produced more accurate ratings on the numeric indicators.

3) Value of the Interpretative Guide

The guide proved to be very helpful, and sometimes essential, to providing consistent, non-overlapping applications of the various criteria. As well, once it was available as a reference the assessment of each file took appreciably less time.

4) Order of Review

For the initial assessments the evaluator reviewed the documents in chronological order, starting with the Basis of Claim form and ending with the RAD decision. This followed the approach that a senior RAD Member had indicated was applied to the appellate process itself. Part way through the evaluation process the order was reversed to determine whether such an approach would contribute to a faster review process without compromising the quality of the evaluations. As it appeared to do so, the approach was maintained throughout the balance of the project.

5) Numeric Ratings

While, as noted above, the use of a numeric rating system was important to the evaluation process, the rating scales sometimes proved problematic. As is discussed in more detail in Part II, there were times when, because of the nature of a particular decision, almost any rating on the 3-point scale could be misleading. There were also criteria for which a Yes-No assessment seemed less helpful than the 3-point rating scale.

As well, the application of the 3-point scale to the criteria as currently worded can introduce more subjectivity into the evaluation than is perhaps desirable. Basing a numeric assessment of criteria such as ‘plain’ or ‘easily understood’ on non-specific terms such as ‘acceptable achievement’ or ‘above-average’ assumes a shared understanding of these terms between the evaluator and the readers of the evaluation. It may also make it difficult to take into account changes in evaluators over the years.

As a result, the Board may wish to consider developing specific assessment standards for each criterion. A number of these are suggested in Part II of this Report.

As well, the evaluator was advised at the outset that the approach to numeric ratings was to be a relatively generous one, and in particular that there needed to be fairly serious flaws before a rating criterion could be said not to have been met. While this is a matter for the Board’s discretion, it could be that aggregated conclusions based on such an approach may fail to identify areas where, while decisions are not seriously flawed, improvements are very much to be desired.

6) Efficiency

In all, having clear interpretations of the criteria used in the evaluations, starting the review with the RAD decision, and eliminating the review of the Basis of Claim form substantially expedited each assessment.

Some Very General Observations on Decisions

It is understood that the Board will use an aggregated assessment of the 61 evaluations as the basis to draw conclusions about the quality of RAD decisions overall. It follows that un-aggregated, general observations have a limited role in assisting the Board to move towards its objective of high quality decisions.

Nonetheless, in the event that they may be useful to the Board, a few general observations are set out here.

1) Quality in Relation to Available Time

The evaluations for this project were completed on the basis of the record and the decision alone. There was accordingly no basis to take into account contextual factors such as the time available to write a decision.

Since there can be a tendency in reviewing evaluations to focus on areas needing improvement, it is important that concerns be considered in relation to the workload expectations placed on Members, and accordingly the amount of time they have to write, and especially to re-write, their decisions. Given the expectation that, on average, Members will review a file and complete their decision in two working days, certain identified shortcomings in decision writing could well be said to be an understandable consequence of that expectation.

2) Extensive Analyses of Legal Issues

Some Members showed a tendency to write at length about the legal standards that define the role of the RAD or the legal standards for new evidence or oral hearings, or the law establishing legal standards for substantive matters such as credibility assessments or state protection. While the justification for this practice is outside the scope of this evaluation, it is of note that these analyses sometimes dealt with aspects of those standards that seemed to have no relevance to the decision in question. For example, a decision might discuss in some detail the appellate role of the RAD on credibility issues even though no such issues arose in the appeal, or discuss the legal standards for an aspect of the new evidence test that similarly was not relevant to the specific case.

These practices raise two issues. First, where the analysis addresses matters not relevant to the appeal it can give the parties the impression that the decision is a generic one rather than one focused on their particular circumstances and concerns. And, second, even where all issues dealt with in a legal analysis can be said to pertain to the appeal, writing at length on these matters requires the reader to work through a quite-complicated and lengthy legal analysis before getting to the decision on the merits. In most cases this will not contribute to the Appellant’s ability to understand the decision.

3) Use of Preliminary Overviews

More so in the Montreal decisions than in the Toronto decisions reviewed, there was a tendency to start decisions with separate, and sometimes relatively lengthy, overviews of the RPD decision, the Appellant’s arguments and/or the facts. In some instances little of this was used in discussing the merits of the appeal; in others, significant parts of it were repeated in that analysis. Rarely, if ever, did this practice make the decision easier to comprehend.

4) Demonstrating the Deliberative Process

In some instances it appeared that a Member had written a significant amount of text in coming to a decision that then turned out not to be necessary to the decision that was made. They then left the text in the decision rather than editing it out of the final draft.

Similarly, some decisions appeared to be written to show the deliberative process that was gone through to get to the decision rather than being written to explain and support the decision that had been made. This tendency to write towards a decision to be made, rather than from a decision that has been made, often added extraneous material and made the path to the result more difficult to follow.

It should be stressed that these are not serious flaws in decisions. They do not reflect concerns about the completeness of the reasoning process, but instead reflect writing styles that can make it more difficult for a reader, especially a lay reader, to get through a decision and readily appreciate the bases on which it was made.

Other Matters
1) Training on the evaluation criteria

Based on attendance at the RAD training program for new Members, and a review of the materials provided to Members in the sessions on decision-writing, it appears that the criteria used in Board evaluations of decisions are not provided to, or at least not taught to, the Members. If this is not addressed elsewhere, then given that the criteria have been approved by the Board as formal indicators of decision quality there would seem to be some logic in including the criteria in the training for Members.

This is not to say that similar aspects of decision-writing are neglected in the training, but only to suggest that making clear to the Members the Board’s formal indicia of decision quality could be very helpful to advancing that goal.

2) Substituting Decisions and Providing Directions

There appeared to be limited use of the RAD’s power to substitute a decision or to provide directions for a rehearing. The former is of course determined to a considerable extent by the state of the record of the RPD proceedings, but there could still be value in measuring whether the use of the power changes as the RAD develops over time.

Similarly, there was very limited use of the power to provide directions for a rehearing at the RPD. This would seem to be a missed opportunity to add value to the appellate process, and efficiency at the rehearing, by providing expert guidance based on a review of the entire record and of the specific errors found at first instance. It too might be something that the Board would like to measure in future evaluations.

3) New Members

It appears that the recently-appointed RAD Members differ from the existing complement of Members in not having previously served as RPD Members. The coming year accordingly provides an opportunity to assess whether there are any statistically significant differences in decisions based on that distinction.

Conclusion

To paraphrase a well-known author, all sound decisions are sound in the same way, but all flawed decisions are flawed in their own way.

This reality fully validates the detailed assessment criteria used by the Board in this important exercise. This approach moves beyond impressionistic assessments. It not only disciplines the evaluation process but also provides the Board with quite specific information on areas where increased training and/or new standards will produce significant quality enhancements.

The Board is to be congratulated for undertaking this kind of review, something that is regrettably rare in other parts of the administrative justice world. It was a pleasure to have the opportunity to take part in this exercise, and it is hoped that the results of the review, as well as the specific suggestions for future such exercises, are of value to the Board.

Part II

Part II of this Report begins by setting out the interpretative guide developed for this evaluation. It then goes on, in a separate section, to review and offer comments on the criteria used in the evaluation, both those initially provided (Question #1 to Question #34) and those added as the process unfolded (Question S1 to Question S7). [Note: The separate section of Part II was submitted to the IRB and included a detailed analysis of the checklist. Because of the technical nature of that analysis it is not included here; it will instead be brought forward when updates to the methodology are considered.]

Interpretative Guide to the Checklist for One Party (Appellant) Appeals - July 18, 2016

Outcome

‘Returned with instructions’ means substantive directions beyond being heard by a different panel. May differ from the disposition recorded on the file.

 

Question #9

Assesses how the Member proceeds when raising a new issue from the record or where specialized knowledge is used (see RAD Rule 24) or where a change in country conditions is relied upon. Per Legal Services, it is a breach of natural justice to deal with an issue that was not addressed in the RPD decision or the appellant’s memorandum, even if fully canvassed at the RPD hearing, without giving notice and an opportunity to make submissions. [Note that Question S2 addresses whether a Member raises a new issue; i.e. goes to the thoroughness of the review rather than to procedural fairness].

 

Question #24

A bare statement of the issues is treated as falling short of the standard and rated as 1, with a basic summary as 2 and a particularly helpful summary as 3. An example of a bare statement of the issues would be: “The issues in this appeal are credibility and whether there is an IFA”.

 

Question #25

Means that the Member addresses the parties’ positions, with the scale assessing whether the thoroughness with which they are addressed matches their relevance to the outcome. The question does not assess whether the findings of the RPD are addressed.

 

Question #26

Note that credibility overall is treated as a fact for the purposes of this question and Question #6.

 

Question #27

Note that the issue is findings of fact, not findings of error.

 

Question #28

This will apply less frequently where the appeal is allowed.

 

Question #29

Includes case law: answer in the negative when the only case law referred to is that concerning the RAD’s jurisdiction.

On guidelines, the issue is whether the standard is referred to and its application explained, not whether it is formally cited.

Note that there are no Jurisprudential Guides or persuasive decisions for RAD.

 

Question #30

The core issue is whether potential biases concerning socio-cultural background were adverted to and filtered out of the assessment of how the claimant’s or another witness’s evidence was presented at the RPD.

Does not include taking into account the cultural and social conditions of a country.

 

Question #31

This does not involve a formal plain language assessment. A decision will be rated 2 unless it used challenging, technical or obscure language or was written in a particularly complex structure.

 

Question #32

To avoid an overlap with Question #31, ‘clear’ will address whether the reasoning, as opposed to the language or structure used, is clear: this question assesses whether, despite some difficult language or structure, the reasons explain the conclusion(s).

To avoid an overlap with Question #34, ‘conciseness’ is treated as referring to the length of the analysis of each issue, as opposed to the length of the reasons overall.

 

Question #33

Given that ‘easily understood’ and ‘logically sequenced’ could be seen to overlap with ‘clear’ in Question #32, the approach will assess whether the reasons are structured to be easily understood: i.e., easily understood because logically structured and, ideally, sign-posted throughout.

 

Question #34

Given the potential overlap with Question #32 regarding conciseness, the assessment will look at the length of the reasons overall and at whether matters not necessary to support or explain the decision were dealt with.

 

Question S1

The assessment will also consider the application of the appropriate tests for an oral hearing if new evidence is admitted.

 

Question S2

The assessment will look at whether the member finds a new issue in the record. [Compare to Question #9, which addresses the procedural fairness of raising a new issue.]

 

Question S3

The assessment will look for a clear articulation of the distinct issues in the appeal and for reasons that address all relevant considerations and information under each distinct issue: evidence, law, submissions, RPD findings and RAD conclusions.

 

Question S4

The assessment looks for focused reasons that demonstrate, and ideally explain, a clear line between each issue and how it was resolved. A decision could meet Question S3 by providing an issue-by-issue analysis and yet not meet this standard.

 

Question S5

The assessment will consider whether the reasons do more than support the result and instead explain to the Appellant how and why they lost or won.

The standard is applied less strictly in cases where the Appellant won their appeal.

 

Question S6

The assessment will consider whether the reasons provide information that is likely to assist the RPD to determine the matter, if returned, and/or to assist others in resolving similar issues once the decision is posted on CanLII.

 

Question S7

The assessment will consider whether an independent assessment is demonstrated by independent engagement with the evidence; by an independent analysis of credibility; by references to the recording, submissions or additional documents from the record; and by the avoidance of phrases like ‘the RPD reasonably found’ or ‘it was open to the RPD to find’. It may also be expressly stated as the approach used.

Note that the FCA decision conclusively deciding the RAD’s role (Canada (Citizenship and Immigration) v. Huruglica, 2016 FCA 93) was released on March 29, 2016, in the midst of the period from which decisions for this evaluation were drawn. As a result, RAD decisions may reasonably vary on this issue depending on whether they were released before or after the FCA decision.

 

Appendix B – Biography of Doug Ewart, Evaluator

Doug holds an LL.B. from Osgoode Hall Law School and an LL.M. from the London School of Economics. He was the head of the Ontario Attorney General’s Policy Development Division for thirteen years, having previously combined policy work for the Ministry with an extensive criminal law appellate practice before the Court of Appeal for Ontario and the Supreme Court of Canada. He has published three legal texts.

He also spent some thirteen years in the federal public service on an executive interchange to the Government of Canada. There, his responsibilities included acting as Senior General Counsel and Senior Advisor to the Deputy Minister of Justice, and then as Senior Advisor to Deputy Ministers at the Privy Council Office and the Department of Indian Residential Schools Resolutions Canada.

In the last-noted role he was credited with being the principal architect of, and played an extensive role in implementing, a national dispute resolution process for individual claims of sexual and physical abuse of Aboriginal children who attended residential schools. To date it has been successfully used to resolve almost 40,000 of those sensitive and complex claims.

His administrative justice experience also includes serving as the Executive Lead for the transformation of the Human Rights Tribunal of Ontario into a direct access, active-adjudication body. This involved designing an entirely new process for receiving and determining human rights claims and serving as the operational lead for the start-up phase of the new tribunal.

As well, he has served as Special Advisor to the Executive Chair of Ontario’s first cluster of adjudicative tribunals, working to develop and implement this new approach to tribunal efficiency and effectiveness for five environment and lands tribunals. He then went on to participate in developing the policy framework for the clustering of seven social justice tribunals.

His other experience includes acting as the policy lead for the Review of the Roots of Youth Violence established by the Premier of Ontario, and being the lead drafter of its report.

Appendix C - Combined Oral-Paper Checklist

Timely and Complete Pre-proceeding Readiness

  • #1  The member ordered a hearing within 10 working days of being assigned to the file or of receiving new evidence.
  • #2  The member prepared the list of relevant issues.
  • #3  The registry notified all parties within 10 working days of receipt of the list of issues.
  • #4  The registry notified the UNHCR within 10 working days of receipt of the list of issues.
  • #5  The registry arranged all the requirements for the hearing to proceed.
  • #6  The file contains all required information and documents.
  • #7  The file was organized in a logical and standardized manner as established by the division.

Respectful Proceedings

  • #8  The member treats participants with sensitivity and respect.
  • #9  The member ensures parties have an opportunity to present and respond to evidence and to make representations.
  • #10  The member identifies when the evidence has not adequately addressed an important issue and asks questions of clarification.
  • #11  Communications in the absence of a party are disclosed and summarized on the record.
  • #12  Problems with interpretation are identified and addressed.

Focused Proceedings

  • #13  The member begins the hearing by clarifying its purpose and confirming the focus on the list of issues.
  • #14  The member ensures the parties focus testimony and documentation on relevant issues.
  • #15  The hearing is completed in the time allotted, as indicated in the Notice to Appear.
  • #16  The member's questioning is relevant to the issues set out in the hearing agenda (see #13).
  • #17  The member's questioning is focused and organized.
  • #18  The member manages challenging situations as they arise.
  • #19  During the course of the hearing, the member appropriately identifies opportunities to narrow the issues with the consent of the parties.
  • #20  If appropriate, the member further focuses and narrows the issues for final representations.
  • #21  The member accommodates needs of vulnerable participants, including unaccompanied minors, to facilitate their presentation of evidence.
  • #22  The member deals with oral applications made by parties.
  • #23  The member identifies applicable legislation, regulations, rules or guidelines.

Reasons are Complete

  • #24  The member summarizes the main issues.
  • #25  The member addresses the positions of all parties.
  • #26  The member makes clear, unambiguous findings of fact.
  • #27  The member supports findings of fact with clear examples of evidence shown to be probative of these findings.
  • #28  The member addresses parties’ evidence that runs contrary to the member’s decision and explains why certain evidence was preferred.
  • #29  The member identifies legislation, regulations, rules, Jurisprudential Guides, Chairperson’s Guidelines or persuasive decisions where appropriate.
  • #30  The member takes into account social and cultural contextual factors in assessing evidence.

Reasons are Transparent and Intelligible

  • #31  The member uses plain language.
  • #32  The member gives appropriately clear and concise reasons.
  • #33  The reasons are easily understood and logically sequenced.
  • #34  The reasons are duly economical, taking into account the complexity of the appeal and the volume of evidence.

Reasons are Timely

  • #35  The number of days from assignment of the file to the decision, and the number of members.
  • #36  The number of days from perfection of the appeal to assignment of the file, and the number of members.

Supplementary Questions

  • S1  The reasons apply the appropriate tests for the admission of new evidence.
  • S2  The decision considers all relevant issues from the record as appropriate.
  • S3  The reasons are structured around the issues (i.e. they are issues-driven).
  • S4  The reasons show a clear logic path to the result.
  • S5  The reasons are likely to explain the result to the subject of the appeal.
  • S6  The reasons appear to provide useful guidance to the RPD and other readers (e.g. on CanLII).
  • S7  The member conducted an independent assessment of the claim rather than a review of errors made by the RPD.

Appendix D – Rating Scale

  1. Needs Improvement: The quality requirement was not met. The evidence showed one or more key instances where the proceeding or reasons would have markedly benefited had this requirement been met. There may have been an effort to apply the requirement but the level of achievement fell short of expectations.
  2. Meets Expectations: This is a level of acceptable achievement. On balance, the decision-maker satisfied this quality requirement though there is margin for minor improvement.
  3. Above Expectations: This is a level of consistent, above-average achievement. The evidence shows a grasp of the quality requirement and an understanding of its importance to a high quality proceeding or decision, as the case may be.

Footnotes

Footnote 1

The 10-working-day time limits in questions #1, #3 and #4 are administrative time limits set by the IRB, not legislative ones.


Footnote 2

Dunsmuir v. New Brunswick, 2008 SCC 9.
