Part 3: Evaluation tool instructions

How to use the tool

The steps to follow when evaluating a role using the APS Role Evaluation Tool are:

  1. Understand the role – collect all relevant information regarding the role.
  2. Analyse the role – consider the frequency and significance of each task.
  3. Assess the role – read and select the appropriate work value description (for each factor) and assign the corresponding points.
  4. Record the results of the evaluation – provide sufficient evidence to support the selection and ensure the evidence provided fully meets each descriptor (a summary record is provided as part of the tool).
  5. Determine the proposed classification – combine the scores and use the table to determine the preliminary assessment outcome based on the total score.
  6. Validate the evaluation – compare the preliminary assessment against the corresponding work level standards and confirm alignment.
  7. Finalise the evaluation – submit to the delegate for approval and assignment of the approved classification.

Evaluation factors

Evidence about a role is analysed against a set of factors which are relevant to all jobs within the APS. These are aligned to five characteristics identified in the APS work level standards (APS Level and Executive Level classifications). There are nine evaluation factors. Each is explained in the APS Role Evaluation Tool.

  1. Knowledge Application
  2. Accountability
  3. Scope and Complexity
  4. Guidance
  5. Decision-making
  6. Problem Solving
  7. Contacts and Relationships
  8. Negotiation and Cooperation
  9. Management Responsibility / Resource Accountability

Assigning a work value description

Each of the nine evaluation factors contains an overarching definition and a range of work value descriptions. The work value descriptions correspond directly to the APS work level standards and expectations for the APS Level and Executive Level classifications.

Using the job-related information/evidence obtained in the first part of the role evaluation process, users of the tool need to read the descriptions for each job evaluation factor and analyse the information gathered against the work value descriptors. The assessor then needs to consider which description best describes the role expectations.

To determine the most appropriate description of the role, the assessor must maintain a balanced view and should compare descriptions corresponding with lower and higher levels to identify the best fit. The selected level should reflect the role as it is normally performed, rather than exceptional or occasional duties.

A role must meet the full intent of a description for that description to be selected. If the role exceeds a particular description, but fails to meet the full intent of the description of the next highest level, then the lower description should be selected.

It is also important to be clear about which job evaluation factor is most appropriate for each specific component of the role. To avoid overstating the overall value of an input, the assessor must ensure the same input is not attributed to more than one job evaluation factor (e.g. supervising staff should only be attributed to the ‘Management Responsibility / Resource Accountability’ factor; it should not also be attributed to the ‘Contacts and Relationships’ factor).

Record the results

The analysis determining the outcome of the role evaluation should be recorded using the Role Evaluation Tool Summary Record. The record can be used for reference in future evaluations and to support decision making and record keeping requirements.

Importantly, the assessor must document the rationale for the selection of each job factor description, citing role-specific responsibilities relating to the particular factor to validate the selection, and note the corresponding points. The analysis should not simply restate the work value description; it should identify the particular tasks or responsibilities the role is expected to perform and describe how these activities relate to the job evaluation factor and associated work value description.


The tool includes a scale for scoring roles, based on the work value descriptions selected. Points correspond with descriptions and the work value for each job evaluation factor. The combined (total) score indicates the proposed job classification level.
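The arithmetic of this scoring step can be sketched as follows. Note that the factor names match the tool, but the point values and classification score bands shown here are hypothetical placeholders; the actual scale is defined in the APS Role Evaluation Tool itself.

```python
# Illustrative sketch only: the points and classification bands below
# are hypothetical placeholders, not the tool's real scale.

# Hypothetical points assigned to each of the nine evaluation factors.
factor_scores = {
    "Knowledge Application": 40,
    "Accountability": 35,
    "Scope and Complexity": 30,
    "Guidance": 25,
    "Decision-making": 30,
    "Problem Solving": 25,
    "Contacts and Relationships": 20,
    "Negotiation and Cooperation": 20,
    "Management Responsibility / Resource Accountability": 15,
}

# Hypothetical (level, minimum score, maximum score) bands.
classification_bands = [
    ("APS 5", 150, 199),
    ("APS 6", 200, 259),
    ("EL 1", 260, 319),
]

def proposed_classification(scores, bands):
    """Combine the factor points and look up the matching band."""
    total = sum(scores.values())
    for level, low, high in bands:
        if low <= total <= high:
            return total, level
    return total, None  # outside all bands: revisit the evaluation

total, level = proposed_classification(factor_scores, classification_bands)
print(total, level)  # 240 falls within the hypothetical APS 6 band
```

The point here is only that the total score, not any single factor, drives the preliminary outcome; a role scoring low on one factor can still reach a higher classification through its other factors.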

Roles may score low against one or more factors and high against others, reflecting the diversity of the role being assessed. For example:

  • a professional/specialist role may score highly against the ‘Knowledge Application’ factor, but lower against the ‘Management Responsibility / Resource Accountability’ factor.

Roles may also score anywhere within a range for a job classification level, reflecting the broad range of work value within each classification level.

Borderline roles

Most roles will score comfortably within the range for a proposed classification level. Some roles, however, may score on the ‘borderline’, i.e. the total score is just below the maximum or just above the minimum score for a particular classification.
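The borderline check described above can be sketched as a simple comparison against the ends of a classification's score range. The band limits and the 10-point margin below are hypothetical; the tool's own scale determines what counts as borderline in practice.

```python
# Illustrative sketch: flag a 'borderline' total score, i.e. one just
# above the minimum or just below the maximum of its classification
# band. The band limits and the 10-point margin are hypothetical.

def is_borderline(total, band_min, band_max, margin=10):
    """Return True when the total sits within `margin` points of
    either end of the classification's score range."""
    within_band = band_min <= total <= band_max
    near_edge = (total - band_min) <= margin or (band_max - total) <= margin
    return within_band and near_edge

# Against a hypothetical APS 6 band of 200-259:
print(is_borderline(205, 200, 259))  # True: just above the minimum
print(is_borderline(230, 200, 259))  # False: comfortably mid-range
```

A score flagged this way would trigger the steps below: re-checking the evidence, possibly re-evaluating, and considering job design.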

If this occurs, it is necessary to revisit the evaluation to ensure that all relevant information has been gathered and considered. It may be necessary to obtain supplementary information and/or undertake another evaluation, perhaps with another suitable person acting as assessor. If the role continues to sit on the borderline of a point range, this may suggest a need to consider job design (e.g. re-assigning duties to enhance or better balance a role and its classification). Note that there may be occasions when a role will legitimately align with the higher or lower end of a particular classification’s range.

In these circumstances it is important to be specific about the particular aspects of the job which are assessed as borderline. Role evaluation should not look at roles in isolation, and if there appears to be some ‘dilution’ of a role then a recommendation could be made that certain tasks are allocated to other roles at that classification or a different classification altogether.

If the role continues to be at the top of the range then the same principle applies. Role analysis should look broadly at the role, and job re-design spread across a few roles may be the better outcome for the agency as a means of balancing classification and a more efficient use of resources.

Validate the evaluation

The preliminary assessment is determined by combining the points for each job evaluation factor to arrive at a total score and identify the corresponding job classification level. The evaluation score should not be treated as the sole authority of a role’s classification.

The preliminary assessment should be reviewed to verify its accuracy. The evaluation must then be compared against the work level standards to check that duties and expectations are appropriately aligned and that the proposed classification level accurately reflects the intent of the work level standards (bearing in mind there may be some overlap in the functions of a role).

If the preliminary evaluation does not accord with the work level standard for the proposed classification level, it may suggest there is a need to consider revising the role expectations.

Alternatively, there may be sufficient justification to support why the allocated classification level differs from the preliminary assessment. This should be documented on the evaluation summary record.

Finalise the evaluation

Once the content of the evaluation summary record (the preliminary assessment outcome) has been compared against the work level standards, a decision can be made on the classification level for the role.

When the assessor is satisfied that the evaluation is valid (i.e. satisfied that there is an appropriate degree of correspondence with the job and the work level standards or justification supports an alternative decision), the evaluation can be finalised.

All parts of the Role Evaluation Tool should be completed and supporting documentation attached to the record as evidence to support the evaluation outcome. The assessor can then submit the evaluation record to the appropriate decision-maker (delegate) for consideration.

In approving the evaluation and determining the role classification level, the delegate assigns an approved classification in accordance with the Classification Rules. To avoid a conflict of interest, the decision-maker should not also be the assessor responsible for undertaking the evaluation.

Last reviewed: 
29 May 2018