Appendix 2 - Survey Methodologies

Agency Survey Methodology

The scope of the agency survey was the 97 Australian Public Service (APS) agencies, or semi-autonomous parts of agencies, employing at least 20 staff under the Public Service Act 1999.

Agencies were provided with access to the online survey on 2 June 2011 and had six weeks to submit their response. As part of the survey return, agency heads had to ‘sign off’ their agency’s response. All 97 agencies responded to the agency survey. The Australian Public Service Commission (the Commission) relied on these survey results as one key source of information in preparing this report.

Employee Survey Methodology

The employee survey sampling methodology was developed in consultation with the Australian Bureau of Statistics. Content was designed to establish the views of APS employees on issues such as leadership, health and wellbeing, learning and development, job satisfaction, and general impressions about the APS. The Commission relied on these survey results as one key source of information in preparing this report.

Scope and coverage

The scope of the employee survey was all APS employees (ongoing and non-ongoing) in agencies with at least 100 APS employees. Employees in agencies that employed fewer than 100 APS employees were excluded because responses could possibly identify individuals.

The survey sample was drawn from the Australian Public Service Employment Database (APSED) on 7 April 2011, at which time APSED indicated the total number of APS employees was 165,906. The survey sample was selected from the total population of APS employees from agencies with at least 100 APS employees, which numbered 164,832. Appendix 1 provides information on agencies’ APS employee numbers as at 7 April 2011.

Stratification

A stratified random sample of 17,865 APS employees was selected from APSED. The sample was stratified by:

  • level (APS 1–6 [including Trainee and Graduate APS], Executive Level [EL] and Senior Executive Service [SES] classification groups)
  • agency size (small: 100–250 APS employees; medium: 251–1,000 APS employees; and large: >1,000 APS employees)
  • agency (for agencies with at least 200 employees)
  • location (ACT and non-ACT).

To enable sound statistical inferences to be made about all APS employees, individuals were randomly selected from each stratum. Each individual in a stratum had an equal chance of being selected.

Sampling rates varied between the strata (level, agency size, agency and location) to ensure accuracy of population estimates against these key characteristics. For example, a much higher sampling rate was required for a smaller population (such as the SES) than for a larger population (such as APS 1–6 employees).

The required level of accuracy also varied between the strata, which further contributed to the differing sampling rates.

The stratification process did not bias the population estimates because the responses were weighted to take differing sample rates into account (see also ‘Weighting and estimation’).
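
As a minimal illustration of this design, the Python sketch below draws a simple random sample within each stratum at a stratum-specific rate. The frame, strata and rates shown are hypothetical placeholders, not the actual survey design.

```python
import random

# Hypothetical sampling frame: each record pairs an employee identifier with a
# stratum key combining classification level, agency/agency size and location.
frame = [(i, random.choice(["APS1-6|ACT", "EL|non-ACT", "SES|ACT"]))
         for i in range(10000)]

# Hypothetical sampling rates: a smaller population (such as the SES) is
# sampled at a much higher rate than a larger one (such as APS 1-6).
rates = {"APS1-6|ACT": 0.05, "EL|non-ACT": 0.15, "SES|ACT": 0.60}

def stratified_sample(frame, rates, seed=1):
    random.seed(seed)
    sample = []
    for stratum, rate in rates.items():
        members = [record for record in frame if record[1] == stratum]
        n = round(len(members) * rate)
        # Within a stratum, every individual has an equal chance of selection.
        sample.extend(random.sample(members, n))
    return sample

selected = stratified_sample(frame, rates)
print(f"{len(selected)} employees selected across {len(rates)} strata")
```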

Reporting of results from agencies with at least 200 employees

This year the sample size for the employee survey was significantly increased to enable agencies with at least 200 employees to receive a copy of their own results from the employee survey for internal management purposes. For this to occur, 58 agencies were included separately in the stratification process (see ‘Stratification’).

Privacy, anonymity and confidentiality

Maintaining confidentiality throughout the employee survey process was of primary concern to the Commission.

Privacy arrangements precluded Commission staff—other than those in the APSED team, the Group Manager of the Human Capital Research and Evaluation Group and the Commission’s Executive—from accessing APSED data relating to individuals. This meant the identity of individuals selected in the sample from APSED was not available to the Commission’s State of the Service team or other non-APSED staff involved in the survey. A small number of ORIMA Research staff and Lighthouse Data Collection staff had access to the sample. All responses were anonymous so individuals could not be identified.

Each person invited to participate in the employee survey was provided with a unique password to prevent multiple responses from individual respondents.

Survey design

The employee surveys conducted in previous years were used as the basis for this year’s survey. Some questions are included every year, others are cycled through on a two- or three-year basis, and some were included for the first time this year to address topical issues. To ensure the Commission maintains comparable time series data, changes to questions used in previous years were kept to a minimum.

The draft employee survey was subjected to individual and paired pilot testing involving APS 1–6, EL and SES classifications from the Australian Electoral Commission, Department of Immigration and Citizenship, Department of Innovation, Industry, Science and Research, Department of Defence and Department of Finance and Deregulation.

The employee survey was delivered using three methods.

The main delivery method was online through a password-protected internet site. Most employees in the sample were sent an email from ORIMA Research on behalf of the Commissioner inviting them to participate in the online survey.

The second delivery method was paper-based. This was used for employees in agencies who did not have access to an individual email account or who had no (or only limited) access to the internet. These employees received a letter from the Commissioner inviting them to participate in the survey and a paper copy of the survey to complete and return to ORIMA Research.

The third delivery method was by telephone. This was trialled this year for 118 Aboriginal Hostels Limited employees working in remote locations without access to computers or the internet. These employees received a letter from their agency inviting them to participate and providing them with information on the survey. They were then surveyed by telephone by Lighthouse Data Collection.

The 17,865 invitation emails and letters were sent to employees in the sample on 9 May 2011. The deadline for survey completion was 3 June 2011.

The final sample was reduced by 539 to 17,326. This adjustment accounted for those who were ultimately excluded from the survey (for example, because of repeatedly bounced emails, returned paper copies, or being out of the office for the entire survey period).

Weighting and estimation

Survey responses were weighted to reflect the characteristics of the population of APS employees. This ensured that the demographic characteristics (used for sample selection) of the survey results matched those of all APS employees. The weighting process was based on the four demographic characteristics used for selection of the sample, namely:

  • level (APS 1–6 [including Trainee and Graduate APS], EL and SES classification groups)
  • agency size (small: 100–250 APS employees; medium: 251–1,000 APS employees; and large: >1,000 APS employees)
  • agency (for agencies with at least 200 employees)
  • location (ACT and non-ACT).

Around 350 weights were applied—level (3), multiplied by location (2), multiplied by agency size and agency (59). For this survey, the weight for each stratum (for example, ACT-based EL staff in a particular large agency) was calculated by dividing the population share of that stratum by the proportion of survey respondents in that stratum. For example, if 1% of APS employees, within the scope of the survey, were ACT-based EL staff working for a particular large agency, and 2% of all survey respondents were ACT-based EL staff within that agency, then the applied weight would be 0.5. If the data were not weighted, some strata would be over-represented and others under-represented in total survey results.
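
Restated in code, the weighting rule above reduces to a single division. The sketch below reproduces the hypothetical 1% and 2% example from the text; the figures are illustrative only, not actual survey data.

```python
# Weight for one stratum: population share divided by respondent share.
# Values are the hypothetical example from the text, not actual survey data.
population_share = 0.01   # 1% of in-scope APS employees are in this stratum
respondent_share = 0.02   # 2% of survey respondents are in this stratum

weight = population_share / respondent_share
print(weight)  # 0.5: an over-represented stratum is weighted down
```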

The weighting approach was based on that used in previous years. Application of a uniform approach to sample selection and weighting continued to assist in the development of time series data. The weighting approach assumed that, for the characteristics of interest, respondents answered in the same way as non-respondents would have: that is, responding persons are taken to represent non-responding persons.

In this survey, with a response rate of 59%, the views of non-respondents would need to differ markedly from those of respondents to bias the overall results to a significant extent. This report’s analysis therefore assumes there is no significant bias between those who responded to the survey and those who did not. This assumption should be considered when using the data to make inferences about the APS population.

Results have generally been presented rounded to the nearest whole percentage point (that is, 38% not 37.7%). Due to this rounding, the percentage results for some questions may not add up to exactly 100%.

Measures of error and accuracy

Two types of error can occur in sample surveys: non-sampling error and sampling error. Non-sampling error causes bias in statistical results and can occur at any stage of a survey or census (that is, a collection in which every member of the target population is included). Sampling error arises because not every member of the population is surveyed in a sample survey; in other words, a measured sample statistic is not usually identical to the true population value. Estimating non-sampling error can be difficult, whereas sampling error can be estimated mathematically. It is important to be aware of these errors, in particular non-sampling error, and to aim to minimise or eliminate them from the survey.

Non-sampling error

This year’s employee survey achieved a response rate of 59%. This response rate excludes responses that were received but were insufficiently complete to provide input into the final data. This response rate is creditable for a voluntary survey.

Non-sampling errors can result from imperfections in reporting by respondents, errors made in recording and coding of responses, and errors made in processing data. No quantifiable estimates are available on the effect of non-sampling errors. However, every effort has been made to minimise the non-sampling errors by careful survey design and efficient implementation. In particular, the online survey design minimised the possibility of errors being made in the recording and coding of responses, as the respondents themselves entered the data when responding.

In addition, identifiable errors made by respondents while completing the survey were removed from the results database. Blank responses were generally coded to non-response categories. The exception was where a demographic item needed for weighting was left blank; in these cases, the survey response was disregarded.

Sampling error

One measure of the sampling error of a population estimate is the standard error. There are about 19 chances in 20 that a sample estimate will be within two standard errors of the true population value. This is known as the 95% confidence interval.

The Commission is 95% confident, for instance, that the true percentage of the population who agree that their supervisor encourages them to build the capabilities and/or skills required for new job roles is between 67.0% and 68.8% (a sample estimate of 67.9% and a confidence interval of ±0.90 percentage points, based on a standard error of 0.45 percentage points).
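
The calculation behind this interval can be sketched directly from the published figures for that question (the estimate and standard error quoted above):

```python
# 95% confidence interval as the estimate plus or minus two standard errors.
estimate = 67.9         # sample estimate (%)
standard_error = 0.45   # standard error (percentage points)

margin = 2 * standard_error
lower, upper = estimate - margin, estimate + margin
print(f"{estimate}% +/- {margin:.2f} percentage points -> ({lower:.1f}%, {upper:.1f}%)")
# 67.9% +/- 0.90 percentage points -> (67.0%, 68.8%)
```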

The following table illustrates the confidence intervals from the sample design associated with estimates from some key questions in the employee survey.

Table A2.1: Confidence intervals for APS employee survey results, 2010–11
Question | 95% confidence interval (percentage points) | Estimate (%)
Agree that their supervisor encourages them to build the capabilities and/or skills required for new job roles | ±0.90 | 67.9
Agree that their agency has sound governance processes for effective decision making | ±0.96 | 56.7
Agree that in their agency, the leadership is of a high quality | ±0.95 | 45.3
Agree that their agency operates with a high level of integrity | ±0.88 | 71.2
Agree that their input is adequately sought and considered about decisions that directly affect them | ±0.98 | 51.9
Considering their work and life priorities, are satisfied with the work-life balance in their current job | ±0.88 | 70.5
Would recommend their current agency as a good place to work | ±0.93 | 63.8
Are always looking for better ways to do things | ±0.71 | 88.3
Are satisfied with their own access to learning and development opportunities in their agency | ±0.97 | 58.1

Results have not been reported for questions where the number of unweighted responses was fewer than 30, for two reasons: to eliminate the possible identification of individuals who responded to these questions and to remove less reliable results from the analysis. Results with a confidence interval of more than ± 15 percentage points have also been excluded from the analysis. This approach has not affected reporting of results at the aggregate level; however, it has limited the ability to report on disaggregated data where the sample size is small—as is sometimes the case for questions following filter questions.
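
These two suppression rules can be expressed as a simple check; the function and parameter names below are illustrative, not taken from the actual processing system.

```python
# Suppress a result if it has fewer than 30 unweighted responses or a 95%
# confidence interval wider than +/-15 percentage points.
def reportable(unweighted_n: int, ci_half_width_pp: float) -> bool:
    return unweighted_n >= 30 and ci_half_width_pp <= 15.0

print(reportable(120, 4.2))   # True: reported
print(reportable(25, 9.0))    # False: too few responses
print(reportable(40, 18.5))   # False: confidence interval too wide
```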

Estimates relating to disaggregated data, where the sample size is small, will have wider confidence intervals than estimates for aggregated data, or disaggregated data where the sample size is large. For example, the following table illustrates that the confidence interval for Indigenous employees is wider than the confidence intervals for other employees responding to the same question, because the Indigenous population is small.

Table A2.2: Confidence intervals for employee survey results for demographic groups, 2010–11
Question | 95% confidence interval (percentage points) | Estimate (%)
Agree that their supervisor encourages them to build the capabilities and/or skills required for new job roles (women) | ±1.2 | 68.4
Agree that their supervisor encourages them to build the capabilities and/or skills required for new job roles (men) | ±1.3 | 67.2
Agree that their supervisor encourages them to build the capabilities and/or skills required for new job roles (people with disability) | ±3.9 | 57.5
Agree that their supervisor encourages them to build the capabilities and/or skills required for new job roles (people without disability) | ±0.9 | 68.6
Agree that their supervisor encourages them to build the capabilities and/or skills required for new job roles (Indigenous employees) | ±5.0 | 67.9
Agree that their supervisor encourages them to build the capabilities and/or skills required for new job roles (non-Indigenous employees) | ±0.9 | 67.9
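
The widening of intervals for smaller groups reflects how the standard error grows as the number of responses falls. The sketch below uses the textbook simple-random-sampling formula for the standard error of a proportion, ignoring the design effect of stratification and weighting, with hypothetical respondent counts chosen purely to show the scale of the effect.

```python
import math

# Approximate 95% half-width (in percentage points) for a proportion p
# estimated from n responses, under simple random sampling assumptions.
def ci_half_width(p: float, n: int) -> float:
    standard_error = math.sqrt(p * (1 - p) / n)
    return 2 * standard_error * 100

print(round(ci_half_width(0.679, 10000), 1))  # ~0.9 for a large group
print(round(ci_half_width(0.679, 350), 1))    # ~5.0 for a small group
```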

Interpretation of scales

Scales were included in any question requiring a respondent to measure the strength or level of an attitude or opinion. In its simplest form, respondents were asked to rate the level of importance, satisfaction or effectiveness for various workplace variables on a five-point scale.

The scales were generally balanced, allowing respondents to express one of two extremes of view (for example, satisfaction and dissatisfaction) and with a midpoint that allowed respondents to enter a ‘neutral’ response.

When interpreting scales, it is important to realise that the intervals between points on a scale are not equal; that is, the shift in opinion needed to move a respondent from ‘neutral’ to ‘satisfied’ may be much smaller than that required to move a respondent from ‘satisfied’ to ‘very satisfied’.

Open-ended responses

The employee survey provided specified response options for most questions. It also included open-ended response options for some questions, enabling respondents to provide a text response to a question. Open-ended options were commonly provided, for example, as part of a specified response question in the form of ‘other (please specify)’.

Coding

Some open-ended responses were coded to aid analysis. Coding involved, for example, removing irrelevant and incidental comments from statistical outputs.

Interpretation

The report draws on actual comments employees provided through the open-ended questions to complement other information. Employees’ comments represent a rich and valuable data source; however, they do not necessarily represent the views of all employees.

Data cleaning

Every effort was made to ensure the integrity of data from the employee and agency surveys. Where inaccuracies were discovered, or a different methodology was adopted, historical data was revised. For this reason, caution should be exercised when comparing data in this year’s report with that in previous reports. Time series analysis in this report incorporates the historical revisions made to previous datasets.