Appendix 3 - Survey methodologies

This appendix details the survey methodologies used for the State of the Service agency survey (agency survey) and 2013 Australian Public Service (APS) employee census. It also outlines the approach taken to analysing the quantitative and qualitative data collected.

Agency survey methodology

The scope of the agency survey was the 103 APS agencies, or semi-autonomous parts of agencies, employing at least 20 staff under the Public Service Act 1999.

Agencies were provided with access to the online survey between 11 June and 19 July 2013. As part of the process, agency heads were required to sign off their agency's response. All 103 agencies completed the survey, although 24 agencies with fewer than 100 employees completed a shortened version. The Australian Public Service Commission (the Commission) used this survey as a key source of information for this report.

Data cleaning

Agency survey data was rigorously examined for errors and inconsistencies by ORIMA Research and the Commission before being analysed. Where errors were subsequently discovered, corrections were made and all relevant analyses reproduced to ensure the accuracy of the results in this report.

APS employee census methodology

As in 2012, the 2013 APS employee census (employee census) was administered to all available APS employees. This census approach provides a comprehensive view of the APS and ensures no eligible respondents are omitted from the survey sample, removing sampling bias and reducing sampling error.

Employee census design

Employee census content was designed to measure key issues such as employee engagement, leadership, health and wellbeing, job satisfaction and general impressions of the APS. The employee surveys conducted in previous years, along with the 2012 APS employee census, were used as the basis for this year's employee census. Some questions are included every year while others are included on a two or three-year cycle. Some were included for the first time to address topical issues. To ensure the Commission maintains longitudinal data, changes to questions used in previous years are kept to a minimum.

Also included in the employee census were a number of internationally benchmarked items that allowed the APS to be compared to similar organisations; for example, the United Kingdom Health and Safety Executive (HSE) First Pass Tool, which examines employee health and wellbeing.1

The draft employee census was pilot tested with APS 1–6 and/or Executive Level employees from the following agencies:

  • Department of Education, Employment and Workplace Relations
  • Department of Human Services
  • Australian Charities and Not-for-profits Commission
  • Social Security Appeals Tribunal
  • Australian Taxation Office
  • Australian Institute of Family Studies
  • Fair Work Commission
  • Productivity Commission
  • Australian Public Service Commission.

Feedback was provided to the Commission for consideration before the employee census was deployed.

Employee census delivery

The employee census was delivered using the following methods:

  • Online: ORC International emailed each employee a unique link to the survey.
  • Telephone: telephone surveys were conducted with a number of employees working in remote locations.
  • Paper: paper-based surveys were used for employees who did not have access to an individual email account, or who had limited or no access to the internet. These employees received a letter from their agency inviting them to participate, along with a paper copy of the survey to complete and return to ORC International.

Sampling and coverage

The employee census covered all employees (ongoing and non-ongoing) from all APS agencies, regardless of size or location. The employee census population consisted of all APS employees recorded in the Australian Public Service Employment Database (APSED) on 9 April 2013. Agencies confirmed the email addresses held in APSED or, in some cases, supplied the email addresses for their employees.

The employee census invitations were sent to employees from 15 May 2013. The number of invitations increased as previously unrecorded employees were added and incorrect email addresses were corrected. The initial deadline for survey completion was 7 June 2013, although this was extended to 14 June 2013.

The final employee census sample was reduced to 158,358 from an initial 161,359. The adjustment excluded employees with invalid email addresses, casual and intermittent employees not in the workplace, and those out of the office for the entire survey period. Overall, 102,219 employees responded to the employee census, a response rate of 66%. This was higher than in 2012, when 87,214 employees responded to the employee census, a response rate of 55%.

Sources of bias

The employee census methodology removed sampling bias and minimised sampling error by ensuring that all APS employees were invited to take part. However, some employees who had recently entered the APS were not recorded in APSED when the invitations were sent out. Omitting these employees, or others who had recently changed agency, may have introduced some sampling error. This risk was managed by giving agencies the opportunity to review or supply their own email lists, and by encouraging all employees to look out for their invitation and to contact ORC International if they did not receive one. Over the course of the survey, 1,245 additional employees were added to the population, reducing the possibility of sampling error as much as possible.

Non-sampling bias was controlled in part by independently reviewing and testing all items before the employee census was administered. Online administration of the survey records the respondent's answers directly, minimising data entry errors and addressing another source of potential bias.

A potentially large source of non-sampling bias was that not all invitees took part. Overall, 54,500 invitees (34%) did not complete the census. In addition, 1,717 were unable to complete the survey because they were on leave during the survey period. If key groups systematically opted out of the census, this could be a source of non-sampling bias. To test this, the survey sample was compared against the overall APS population on gender, classification, location and employment category (ongoing or non-ongoing). Analyses showed there were only minor differences between the employee census respondents and the APS as a whole.2
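
A representativeness check of this kind can be illustrated with a chi-square goodness-of-fit test, comparing respondent counts in each demographic category against the known population proportions. The sketch below uses invented gender counts and proportions for demonstration; it does not reproduce the Commission's actual figures or testing approach.

```python
# Illustrative representativeness check: compare the gender mix of census
# respondents against the known APS population using a chi-square
# goodness-of-fit test. All counts and proportions below are invented.
from scipy.stats import chisquare

observed = [59_000, 43_219]            # hypothetical respondent counts (women, men)
population_share = [0.576, 0.424]      # hypothetical APS-wide proportions

n = sum(observed)
expected = [share * n for share in population_share]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.4f}")
```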

Privacy, anonymity and confidentiality

Maintaining confidentiality throughout the employee census process was of primary concern to the Commission. To ensure confidentiality, each APS employee was provided with a unique link to the survey via email. Only a small number of staff at ORC International had access to both individual email addresses and the corresponding responses. All responses provided to the Commission by ORC International were de-identified. Due to these precautions, Commission staff could not identify individual respondents to the survey or identify those who had not taken part.

Including agencies with fewer than 100 employees creates an additional privacy risk. Breaking small workforces down into even smaller groups risks participants' anonymity by inadvertently ‘singling out’ easily distinguished employees to their colleagues; for example, the female SES employees in a small agency. Even where there are several such employees, colleagues may attribute responses to specific individuals, whether correctly or not. Beyond breaking anonymity, revealing personal information such as carer responsibilities is a breach of privacy. Furthermore, knowledge of an employee's attitudes towards certain issues, such as leaders or colleagues, could be used against them.

This risk was managed by not reporting, either to agencies or in this report, any segmentation that would have resulted in groups of fewer than 10 responses. For similar reasons, agencies were not supplied with the raw comments provided by respondents. On request, agencies were supplied with text analyses of comments on selected items where the volume of comments was sufficient to ensure anonymity.

Data cleaning

Employee census data was rigorously examined for errors and inconsistencies by ORC International and the Commission before being analysed. Where errors were subsequently discovered, corrections were made and all relevant analyses reproduced to ensure the accuracy of the results in this report.

Precision of estimates

Even with a 66% response rate, the figures discussed in this report are estimates of the true population values. The precision of these estimates is influenced by the amount of data available. A common measure of precision is the margin of error, expressed as a confidence interval around the estimate. This interval gives a range in which the true population value is likely to fall. A 95% confidence interval means there is a 5% chance that the interval constructed from the responding sample does not contain the true population value.

For example, the 95% confidence interval for the true percentage of the population who agree that employees in their agency appropriately assess risk is 58.5% to 59.1% (a sample estimate of 58.8% with a margin of error of ±0.3 percentage points). Table A3.1 shows the 95% margins of error for several survey items. In each case, the true population value is likely to be less than half a percentage point above or below the estimate.
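
Margins of this kind can be approximated with the standard normal-approximation formula for a sample proportion. The sketch below reproduces the ±0.3 point margin from the example; the Commission's exact calculation method is not documented here, and the small difference from the published subgroup margin most likely reflects rounding of the inputs.

```python
# Approximate margin of error for a proportion using the normal
# approximation: moe = z * sqrt(p * (1 - p) / n), with z = 1.96 for 95%.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion, in percentage points."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# Full sample: 58.8% agreement among 102,219 respondents.
print(f"+/-{margin_of_error(0.588, 102_219):.1f} points")  # +/-0.3

# Smaller subgroup: 64.0% agreement among 2,630 Indigenous respondents.
print(f"+/-{margin_of_error(0.640, 2_630):.1f} points")    # +/-1.8 (report: +/-1.9)
```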

Table A3.1 Margins of error for employee census results, 2012–13

Question | 95% margin of error (percentage points) | Estimated result (%)
Agree that employees in their agency appropriately assess risk | ±0.3 | 58.8
Agree that leadership is of a high quality | ±0.3 | 46.3
Agree their job requires them to utilise a variety of different skills | ±0.3 | 76.5
Agree their supervisor is committed to workplace safety | ±0.2 | 82.7
Agree their agency motivates them to help achieve its objectives | ±0.3 | 46.7

Source: Employee census

The large sample size of the employee census allows very narrow margins of error and precise estimates. When the data is segmented into groups, the margins widen as the sample sizes decrease. For smaller groups, such as Indigenous employees (2,630 respondents), precision may drop substantially (Table A3.2). Even so, the Commission is 95% confident that the true proportion of Indigenous employees who agree that employees in their agency appropriately assess risk is between 62.2% and 65.9%, around an estimate of 64.0%.

Table A3.2 Margins of error for employee census item ‘In general, employees in my agency appropriately assess risk’, 2012–13

Demographic group | 95% margin of error (percentage points) | Estimated result (%)
Women | ±0.5 | 58.6
Men | ±0.4 | 59.0
People with disability | ±1.2 | 52.5
People without disability | ±0.3 | 59.3
Indigenous employees | ±1.9 | 64.0
Non-Indigenous employees | ±0.3 | 58.7

Source: Employee census

Analysis strategy

This State of the Service report draws on both quantitative and qualitative data.

Quantitative data

Interpretation of items and scales

Most items in the employee census asked respondents to rate the importance of, satisfaction with, or effectiveness of workplace issues on a five-point, ordinal scale. The scales were generally balanced, allowing respondents to express either of two extremes of view (for example, satisfaction and dissatisfaction), with a midpoint for a ‘neutral’ response. For this report, the five points have generally been collapsed into three: agree/satisfied, neutral, and disagree/dissatisfied. Except where noted, figures reported are the proportion of respondents who answered either strongly agree/very satisfied or agree/satisfied.
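
A minimal sketch of this collapsing step follows; the response labels used are assumed rather than taken from the census form.

```python
# Collapse five-point agreement responses into the three categories used in
# the report. The exact response labels are assumptions for illustration.
collapse = {
    "Strongly agree": "agree",
    "Agree": "agree",
    "Neither agree nor disagree": "neutral",
    "Disagree": "disagree",
    "Strongly disagree": "disagree",
}

responses = ["Agree", "Strongly agree", "Neither agree nor disagree", "Disagree"]
collapsed = [collapse[r] for r in responses]
agree_rate = collapsed.count("agree") / len(collapsed)
print(f"{agree_rate:.0%} agree")  # 50% agree
```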

When interpreting item responses, it is important to remember that the relationship between points on the scale is ordinal: the intervals between adjacent points are not necessarily equal. The strength of opinion needed to shift a respondent from ‘neutral’ to ‘satisfied’ may be much smaller than that required to shift a respondent from ‘satisfied’ to ‘very satisfied’.

Where scale scores are reported, such as the APS Employee Engagement Model scores, the five-point item responses were combined and re-scaled to produce a continuous scale score ranging from 1 to 10. Scores from scales with demonstrated validity and reliability are generally more robust than item-based analyses as they triangulate information from a number of items examining a single issue. They also allow the use of more sophisticated statistical analyses. The employee census is likely to make greater use of scales in future years.
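
The exact scoring rules of the APS Employee Engagement Model are not set out here; the sketch below shows one common way of combining five-point items into a continuous 1 to 10 scale score, by averaging the items and re-scaling linearly.

```python
# One common way to turn several five-point items into a single 1-10 scale
# score: average the items, then rescale linearly. This is a sketch only;
# the APS Employee Engagement Model's exact scoring rules are not given here.
def scale_score(item_responses: list[int]) -> float:
    """Map the mean of 1-5 item responses onto a 1-10 continuous scale."""
    mean = sum(item_responses) / len(item_responses)
    return 1 + (mean - 1) * 9 / 4   # 1 -> 1, 5 -> 10

print(scale_score([4, 5, 4, 3]))  # 7.75
```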

Data analysis

As the agency survey has a 100% response rate, the data is not subject to sampling error. Statistical significance testing is unnecessary. Results are reported as either raw numbers or percentages.

While the employee census was offered to all APS employees, the 66% response rate means that inferential statistics are still required to analyse the data. The analysis of this data has historically used traditional social science techniques, such as χ² tests, with conventional guidelines for determining statistical significance (p<0.05).
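
As an illustration of the kind of χ² test described above, the sketch below tests whether agreement with an item is independent of group membership; the counts are invented for demonstration and do not come from the census.

```python
# Illustrative chi-square test of independence: does agreement with an item
# differ between two groups? All counts below are invented.
from scipy.stats import chi2_contingency

#           agree   neutral  disagree
table = [[ 6_200,   1_900,    1_400],   # hypothetical group A
         [ 5_400,   2_100,    1_900]]   # hypothetical group B

stat, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {stat:.1f}, dof = {dof}, p = {p_value:.3g}")
if p_value < 0.05:
    print("Statistically significant at p < 0.05")
```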

Statistical significance testing estimates the probability that an observed difference between two groups would arise by chance if both were randomly drawn from the same population. If that probability is sufficiently low, it is concluded that the groups are drawn from different populations, and they are described as significantly different. However, statistical significance does not reflect the magnitude of the difference between groups, also called the effect size.

As sample sizes increase, the effect size required to achieve statistical significance decreases. Put another way, even the smallest of differences will be statistically significant if the sample size is large enough. With a sample of 102,219 respondents, effects which are far too small to have any appreciable meaning for the APS will almost certainly be statistically significant.

To avoid providing misleading information by over-emphasising statistically significant but trivial differences, only those results greater than small in magnitude have been reported. Magnitude was calculated using commonly used measures appropriate to the specific analyses being performed (Table A3.3).

Table A3.3: Measures of effect size

Analysis | Effect size statistic3 | Minimum effect size to be reported
z-test of difference in proportions | Cohen's h | ≥0.2
ANOVA/t-test | Cohen's f | ≥0.1
ANOVA/t-test | Cohen's d | ≥0.2
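
Cohen's h, the effect size statistic used for the z-test of difference in proportions in Table A3.3, can be calculated as in the sketch below; the proportions used are illustrative only.

```python
# Cohen's h for a difference in proportions, with the reporting rule from
# Table A3.3: only effects of at least "small" size (h >= 0.2) are reported.
import math

def cohens_h(p1: float, p2: float) -> float:
    """Effect size for the difference between two proportions."""
    return abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))

h = cohens_h(0.55, 0.45)          # illustrative proportions, not census data
print(f"h = {h:.2f}")             # h = 0.20, just large enough to report
print("report" if h >= 0.2 else "suppress as trivially small")
```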

Agency clustering

Functional clusters were introduced in the State of the Service Report 2011–12 to allow comparisons between agencies with similar primary functions. Agencies were originally categorised based on the information they provided in the 2010–11 State of the Service agency survey. Because agencies with varied roles are difficult to assign to a single cluster, the categories were reviewed by the Commission and agency stakeholders before being finalised for 2012–13. Functional clusters will be reviewed over time to ensure they identify the most appropriate benchmarking measures available for agencies. See Appendix 2 for information on individual agencies.

The final functional clusters, based on those used in the United Kingdom Civil Service People Survey, are:

  • Policy: organisations involved in the development of public policy
  • Smaller operational: organisations with fewer than 1,000 employees involved in the implementation of public policy
  • Larger operational: organisations with 1,000 employees or more involved in the implementation of public policy
  • Regulatory: organisations involved in regulation and inspection
  • Specialist: organisations providing specialist support to government, businesses and the public.

Qualitative data

The employee census provided specified response options for most questions. Several items, however, were completely open-ended, asking the respondent to provide a short, written response to a question or statement. These open-ended responses were used to complement information gained through quantitative methods. Not all respondents answered the open-ended questions, and comments do not necessarily represent the views of all respondents; they do, however, represent a rich data source.

Data analysis

Open-ended comment analysis was based on a grounded theory approach, in which key concepts were coded from the collected data either manually or with text mining software such as Leximancer. Comments were reported as themes and concepts rather than individual responses.
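
A minimal sketch of concept-style coding follows. It simply counts frequent terms across comments as rough ‘themes’; Leximancer's actual algorithm is considerably more sophisticated, and the comments and stopword list shown are invented for illustration.

```python
# Rough concept extraction from open-ended comments: count frequent terms
# after removing common stopwords. Illustrative only; not Leximancer's method.
from collections import Counter
import re

comments = [
    "More flexible working arrangements would help",
    "Leadership communication could be clearer",
    "Flexible hours and better communication from leadership",
]

stopwords = {"more", "would", "help", "could", "be", "and", "from", "the"}
words = [w for c in comments for w in re.findall(r"[a-z]+", c.lower())
         if w not in stopwords]
print(Counter(words).most_common(3))  # e.g. [('flexible', 2), ('leadership', 2), ...]
```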


1 Health and Safety Executive, Work Related Stress–Research and Statistics; R Kerr, M McHugh and M McCrory, ‘HSE management standards and stress-related work outcomes’, Occupational Medicine, vol. 59, no. 8, 2009, pp. 574–579.

2 Results may be requested by emailing:

3 J Cohen, Statistical Power Analysis for the Behavioral Sciences, Psychology Press, New York, 2009.