
Appendix 3 - Survey methodologies

Agency survey methodology

The scope of the agency survey was the 101 Australian Public Service (APS) agencies, or semi-autonomous parts of agencies, employing at least 20 staff under the Public Service Act 1999.

Agencies were provided with access to the online survey between 31 May and 13 July 2012. As part of the process, agency heads had to sign off their agency's response. All 101 agencies completed the survey, although 23 agencies with fewer than 100 employees completed a shortened version.1 The Australian Public Service Commission (the Commission) used this survey as a key source of information for this report.


Employee census methodology

In 2012, the Commission moved from the sample survey methodology used in previous years to a census model.2 This involved inviting all current APS employees to fill out the survey (referred to as the employee census). The advantages of the census model were that it included employees from all agencies, provided a comprehensive view of the APS and ensured no eligible respondents were omitted from the survey sample, which removed sampling bias and reduced sampling error.

Census design

Census content was designed to measure key issues such as staff engagement, leadership, health and wellbeing, job satisfaction and general impressions of the APS. The employee surveys conducted in previous years were used as the basis for this year's census. Some questions are included every year while others are included on a two or three-year cycle. Some were included for the first time to address topical issues. To ensure the Commission maintains longitudinal data, changes to questions used in previous years are kept to a minimum.

Also included in the employee census were a number of internationally benchmarked items that allowed the APS to be compared to similar organisations; for example, the United Kingdom Civil Service Health and Safety Executive (HSE) First Pass Tool, which examines employee health and wellbeing.3

The draft employee census was pilot tested with APS 1–6, EL and SES staff from the following agencies:

  • Office of the Australian Building and Construction Commissioner
  • Department of Human Services
  • Department of Agriculture, Fisheries and Forestry
  • Australian Securities and Investments Commission
  • Australian Taxation Office
  • Australian Public Service Commission.

Feedback was provided to the Commission for consideration before the employee census was deployed.

Census delivery

The employee census was delivered using the following methods:

  • Online, through a password-protected internet site. Employees were sent an email from ORC International on behalf of the Commission inviting them to participate in the online survey.
  • Telephone surveys were carried out for a number of employees working in remote locations without internet access.
  • Paper-based surveys were used for employees who did not have access to an individual email account or did not have (or had only limited) access to the internet. Employees received a letter from the Commission inviting them to participate and a paper copy of the survey to complete and return to ORC International.

Sampling and coverage

The employee census covered all employees (ongoing and non-ongoing) from all APS agencies, regardless of size or location. This was a major increase in coverage from 2011, when 17,865 staff from agencies with more than 200 employees were invited to take part.

The census population consisted of all APS employees recorded in the Australian Public Service Employment Database (APSED) on 4 May 2012, when the indicated headcount of the APS was 169,567.

In total, 170,779 invitations were sent to employees from 8 May 2012. The number of invitations was larger than the initial APS headcount because new, previously unrecorded employees were added and incorrect email addresses were corrected. The initial deadline for survey completion was 1 June 2012, although this was extended to 6 June 2012.

The final census sample was reduced to 159,917 (see Appendix 2 for further detail). The adjustment excluded employees with invalid email addresses, casual and intermittent employees not in the workplace, and those out of the office for the entire survey period. Overall, 87,214 employees responded to the employee census, a response rate of 55%.
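
As a quick check of these figures, the adjustment and response rate can be reproduced from the counts above; the short Python sketch below simply restates that arithmetic.

```python
# Census coverage arithmetic, using the counts quoted in the text above.
invitations_sent = 170_779   # invitations issued from 8 May 2012
final_sample = 159_917       # after excluding invalid addresses, absent casuals, etc.
respondents = 87_214

excluded_from_sample = invitations_sent - final_sample
response_rate = respondents / final_sample

print(f"Excluded from final sample: {excluded_from_sample:,}")  # 10,862
print(f"Response rate: {response_rate:.1%}")                    # 54.5%, reported as 55%
```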

Sources of bias

Moving to a census removed sampling bias and minimised sample error by ensuring that all APS employees were invited to take part. However, some employees who had recently entered the APS were not recorded in APSED at the time the invitations were sent out. Omitting these employees, or others who had changed agency recently, may have introduced some sampling error. This risk was managed by encouraging all employees to watch out for their invitation and to contact ORC International if they did not receive one. Over the course of the survey, 883 additional employees were added to the population, reducing the likelihood of sampling error as much as possible.

Non-sampling bias was controlled in part by independently reviewing and testing all items before the census was administered. Online administration of the survey records the respondent's answers directly, minimising data entry errors and addressing another source of potential bias.

A potentially large source of non-sampling bias was that not all invitees took part. Overall, 72,247 invitees (45%) did not complete the census; of these, 1,996 were unable to do so because they were on leave during the survey period. If key groups systematically opted out of the census, this could be a source of non-sampling bias. To test this, the survey sample was compared against the overall APS population on gender, classification, location and employment category (ongoing or non-ongoing). Analysis showed there were only minor differences between the employee census respondents and the APS as a whole.4

Privacy, anonymity and confidentiality

Maintaining confidentiality throughout the employee survey process was of primary concern to the Commission.

To ensure confidentiality, each APS employee was provided with a unique password to prevent multiple responses from individuals. Only a small number of staff at ORC International had access to both individual names and their unique passwords. All responses provided to the Commission by ORC International were de-identified. Due to these precautions, Commission staff could not identify individual respondents to the survey or identify those who had not taken part.

Including agencies with fewer than 100 employees created an additional privacy risk this year. Breaking down small workforces into even smaller groups risks participants' anonymity by inadvertently 'singling out' employees who are easily distinguished by their colleagues, for example, the female SES employees in a small agency. Even where there are several such employees, it is possible to attribute responses to specific individuals by guessing, whether correctly or incorrectly. Besides breaking anonymity, identifying personal information such as carer responsibilities is a breach of privacy. Furthermore, knowledge of attitudes towards certain issues, such as leaders or colleagues, could be used against the employee. This risk was managed by not reporting, either to agencies or in this report, any segmentation that would have resulted in groups of fewer than 10 individuals. Agencies were also not supplied with any raw comments provided by respondents, due to similar risks to anonymity. Agencies were supplied with text analyses of comments on selected items where there was a sufficient volume of comment to ensure anonymity.
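
The minimum reporting threshold described above can be expressed as a simple suppression rule. The sketch below is illustrative only and is not the Commission's reporting system; the 10-respondent threshold is the one quoted in the text.

```python
def suppress_small_cells(group_counts, min_cell_size=10):
    """Return a copy of a segmentation with cells below the threshold suppressed.

    `group_counts` maps a group label to its respondent count; suppressed
    cells are replaced with None so no result is reported for them.
    """
    return {group: (count if count >= min_cell_size else None)
            for group, count in group_counts.items()}

# Hypothetical breakdown of respondents in a small agency.
print(suppress_small_cells({"APS 1-6": 64, "EL": 23, "SES": 4}))
# {'APS 1-6': 64, 'EL': 23, 'SES': None} -- the SES cell would not be reported
```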

Data cleaning

Employee census and agency survey data was rigorously examined for errors and inconsistencies by ORC International before it was provided to the Commission for analysis. Where errors were subsequently discovered, corrections were made and all relevant analyses reproduced to ensure the accuracy of the results in this report.

Precision of estimates

With a response rate of 55%, the figures discussed in this report are estimates of true population values. The precision of these estimates is influenced by the amount of data available. A common measure of precision is the margin of error, expressed as a confidence interval around the estimate. This interval gives a range in which the true population value is likely to fall. When 95% confidence is referred to, it is accepted that there is a 5% chance that the interval constructed from the responding sample does not contain the true population value.

For example, the 95% margin of error for the true percentage of the population who agree that employees in their agency appropriately assess risk gives a range of 59.6% to 60.2% (a sample estimate of 59.9% with a margin of ±0.3 percentage points). Table A3.1 shows the 95% margins of error for several survey items. In each case, the true population value is likely to be within half a percentage point of the estimate.
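
The margin of error quoted above can be approximated with a standard normal approximation for a proportion. The sketch below assumes simple random sampling and applies no finite population correction (assumptions not stated in the report), which is enough to recover the ±0.3 percentage point figure.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion, using the normal approximation."""
    return z * math.sqrt(p * (1 - p) / n)

# Full-sample example from the text: 59.9% agreement among 87,214 respondents.
p, n = 0.599, 87_214
moe = margin_of_error(p, n)
print(f"Margin of error: ±{moe * 100:.1f} percentage points")       # ±0.3
print(f"95% CI: {100 * (p - moe):.1f}% to {100 * (p + moe):.1f}%")  # 59.6% to 60.2%
```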

Table A3.1 Margins of error for employee census results, 2011–12
Question | 95% margin of error (percentage points) | Estimated result (%)
Agree that employees in their agency appropriately assess risk | ±0.3 | 59.9
Agree their agency has sound governance processes for effective decision making | ±0.3 | 51.3
Agree their agency's leadership is of a high quality | ±0.3 | 47.7
Agree their agency operates with a high level of integrity | ±0.3 | 64.1
Agree their input is adequately sought and considered about decisions that directly affect them | ±0.4 | 49.9
Source: Employee census

The large sample size of the census allows very narrow margins of error and precise estimates. When the data is segmented into groups, the margins widen as the sample sizes decrease. For smaller groups, such as Indigenous employees (2,130 respondents), the precision may drop substantially (Table A3.2). Even so, the Commission is 95% confident that the true proportion of Indigenous employees who have confidence in their agency's risk assessment practices was between 62.5% and 66.7%, with an estimate of 64.6%.

Table A3.2 Margins of error for employee census item 18q ‘In general, employees in my agency appropriately assess risk’, 2011–12
Demographic group | 95% margin of error (percentage points) | Estimated result (%)
Women | ±0.5 | 60.4
Men | ±0.5 | 59.2
People with disability | ±1.3 | 53.5
People without disability | ±0.4 | 60.4
Indigenous employees | ±2.1 | 64.6
Non-Indigenous employees | ±0.3 | 59.8
Source: Employee census


Analysis strategy

This State of the Service report draws on both quantitative and qualitative data.

Quantitative data

Interpretation of items and scales

Most items in the employee census asked the respondent to rate the importance of, satisfaction with, or effectiveness of workplace issues on a five-point ordinal scale. The scales were generally balanced, allowing respondents to express one of two opposing extremes of view (for example, satisfaction or dissatisfaction), with a midpoint that allowed a ‘neutral’ response. For this report, the five points have generally been collapsed into three: agree/satisfied, neutral, and disagree/dissatisfied. Figures reported are the proportion of respondents who responded with either strongly agree/very satisfied or agree/satisfied, except where noted.
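
As a concrete illustration of the collapsing described above, a five-point satisfaction item can be recoded into three categories before the 'satisfied' proportion is reported. The labels and responses below are made up for the example.

```python
from collections import Counter

# Hypothetical five-point responses to a single satisfaction item.
responses = ["Very satisfied", "Satisfied", "Neither satisfied nor dissatisfied",
             "Dissatisfied", "Satisfied", "Very dissatisfied",
             "Satisfied", "Neither satisfied nor dissatisfied"]

COLLAPSE = {
    "Very satisfied": "satisfied", "Satisfied": "satisfied",
    "Neither satisfied nor dissatisfied": "neutral",
    "Dissatisfied": "dissatisfied", "Very dissatisfied": "dissatisfied",
}

counts = Counter(COLLAPSE[r] for r in responses)
satisfied_pct = 100 * counts["satisfied"] / len(responses)
print(dict(counts))                        # {'satisfied': 4, 'neutral': 2, 'dissatisfied': 2}
print(f"{satisfied_pct:.0f}% satisfied")   # the figure that would be reported
```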

When interpreting item responses, it is important to remember that the relationship between points on the scale is ordinal rather than interval: the distances between adjacent points are not necessarily equal. For example, the strength of opinion required to shift a respondent from ‘neutral’ to ‘satisfied’ may be much smaller than that required to shift a respondent from ‘satisfied’ to ‘very satisfied’.

Where scale scores are reported, such as the APS Engagement Model scores, the five-point item responses were combined and re-scaled to produce a continuous scale score ranging from 1 to 10. Scores from scales with demonstrated validity and reliability are generally more robust than item-based analyses because they triangulate information from a number of items examining a single issue. They also allow the use of more sophisticated statistical analyses. The employee survey is likely to make greater use of scales in future years.
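
The report does not state the exact re-scaling formula, so the sketch below assumes the simplest possibility: average the item responses (coded 1 to 5) and map that average linearly onto a 1 to 10 range. The Commission's actual scoring method may differ.

```python
def scale_score(item_responses):
    """Combine five-point item responses (coded 1-5) into a 1-10 scale score.

    Assumes a simple linear re-scaling of the mean item response; this is an
    illustrative assumption, not the documented scoring method.
    """
    mean = sum(item_responses) / len(item_responses)
    return 1 + (mean - 1) * 9 / 4   # maps a mean of 1 to 1, and a mean of 5 to 10

# Four engagement-style items answered 4, 4, 5 and 3 on the five-point scale.
print(scale_score([4, 4, 5, 3]))   # 7.75 on the 1-10 scale
```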

Data analysis

As the agency survey has a 100% response rate, the data is not subject to sampling error. Statistical significance testing is unnecessary. Results are reported as either raw numbers or percentages.

While the employee census was offered to all APS employees, a response rate of 55% means that inferential statistics are still required to analyse the data. The analysis of this data has historically used traditional social science techniques, such as chi-square (χ²) tests. Conventional guidelines have been used for determining statistical significance (p<0.05).

Statistical significance reflects the probability that an observed difference between two groups could have arisen by chance if both groups were randomly selected from the same population. If this probability is sufficiently low, it is concluded that the groups are drawn from different populations. These groups are described as significantly different. However, statistical significance does not reflect the magnitude of the difference between groups, also called the effect size.

As sample sizes increase, the effect size required to achieve statistical significance decreases. Put another way, even the smallest of differences will be statistically significant if the sample size is large enough. With a sample of 87,214 respondents, effects which are far too small to have any appreciable meaning for the APS will almost certainly be statistically significant.
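
This effect can be demonstrated numerically. In the sketch below (which uses scipy and entirely hypothetical group sizes), two groups of 43,000 respondents differing by a single percentage point in agreement produce a chi-square test that is comfortably 'significant', yet the corresponding Cohen's w is an order of magnitude below the 0.1 threshold for even a small effect.

```python
import math
from scipy.stats import chi2_contingency

# Hypothetical groups of 43,000 respondents each, differing by one
# percentage point in agreement with an item (60% versus 61%).
observed = [[25_800, 17_200],   # group 1: agree, not agree (60% agree)
            [26_230, 16_770]]   # group 2: agree, not agree (61% agree)

chi2, p, dof, _ = chi2_contingency(observed)
n = sum(sum(row) for row in observed)
w = math.sqrt(chi2 / n)          # Cohen's w

print(f"p = {p:.4f}")            # well below 0.05: 'statistically significant'
print(f"Cohen's w = {w:.3f}")    # about 0.01: far below the 0.1 'small effect' threshold
```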

To avoid providing misleading information by over-emphasising statistically significant differences, results were reported in this State of the Service report in terms of their magnitude. The magnitude was calculated using commonly-used measures appropriate to the specific analyses being performed (Table A3.3).

While these descriptions are intuitive and free of statistical jargon, they differ from those used in previous reports. The terms used in this report have been adapted from the guidelines published in Statistical Power Analysis for the Behavioral Sciences, which are widely used in the social sciences.5 Table A3.4 describes these differences in terms of their magnitude as minor, small, medium or large. In this report Cohen's original term ‘trivial’ has been replaced with ‘minor’, as ‘trivial’ was not considered an intuitive term for the audience of this report.

The following example of how this is applied is taken from the 2011 State of the Service employee survey—an employee's satisfaction with their remuneration had a small effect on their intention to stay in their current agency. By contrast, a feeling of strong personal attachment to their agency had a moderate effect on their intention to stay.

Minor effects—those below small in magnitude—are unlikely to be a source of meaningful information or provide grounds for useful workplace interventions. For example, an employee's gender had a minor effect on their intention to stay with their agency.

While Cohen's guidelines are useful and well-known, statistical magnitude does not necessarily indicate real-world importance. Weak effects can be important, and Cohen's guidelines are largely arbitrary. Rigidly applying these standards risks dismissing results that are important, even if the effects are statistically weak in the available data.

For example, Rosnow and Rosenthal cite the case of a study examining whether daily doses of aspirin reduce the likelihood of a patient suffering a heart attack.6 The results showed that the effect was too weak to meet Cohen's guidelines for a small relationship. However, the fact that patients taking aspirin were 3.4% less likely to suffer a potentially fatal heart attack than those taking a placebo suggested the findings were too important to dismiss. Consequently, care should be taken that the importance of results to the APS is interpreted in context and not solely on the basis of arbitrary statistical guidelines.

Table A3.3 Measures of effect size
Analysis | Effect size statistic(s) | Small effect | Medium effect | Large effect
χ² | Cohen's w | ±0.1 | ±0.3 | ±0.5
ANOVA/t-test | Cohen's f | ±0.1 | ±0.25 | ±0.4
ANOVA/t-test | Cohen's d | ±0.2 | ±0.5 | ±0.8
Table A3.4 Reporting of practical significance
Magnitude | Interpretation | Key descriptors | Example wording | Statistical criteria
Minor | A difference which is undetectable without the use of a large-scale survey; invisible in the workplace and lacking any real impact on the APS. | Minor, marginal | While Group 1 was higher than Group 2, the difference was minor. | p > 0.05 and/or d < 0.2
Small | A subtle effect that requires consideration, or one which combines with other factors to have a larger impact. | Weak, slight | Small but salient differences were found between …; Group 1 was slightly higher than Group 2.; This factor has a weak effect on …; There was a weak relationship between … | p < 0.05 and 0.2 ≤ d < 0.5
Medium | A difference strong enough that it is probably visible in the workplace and may provide grounds for an effective intervention. | Moderate, medium-sized | Group 1 was moderately higher than Group 2.; There was a moderate relationship between … | p < 0.05 and 0.5 ≤ d < 0.8
Large | An effect so large that it is probably clearly evident in the workplace. | Large, strong, considerable | Group 1 was considerably higher than Group 2.; There was a strong relationship between … | p < 0.05 and d ≥ 0.8
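
For completeness, the criteria in Table A3.4 can be written as a small decision rule. This is a sketch of the mapping only; the report's analyses were not necessarily automated in this form.

```python
def describe_effect(d, p, alpha=0.05):
    """Map a Cohen's d value and a p-value to the descriptors in Table A3.4."""
    d = abs(d)
    if p > alpha or d < 0.2:
        return "minor"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

print(describe_effect(d=0.35, p=0.001))   # small
print(describe_effect(d=0.05, p=0.001))   # minor: significant but of no practical note
```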

Longitudinal analyses

The Commission includes certain key items in the employee survey every year to allow longitudinal comparisons to be made. However, the change from a stratified sample to a census may have influenced this year's results for these items. Therefore, any changes between the 2010–11 and 2011–12 employee surveys should be interpreted cautiously.

Agency clustering

To allow comparisons between similar organisations, agencies were categorised based on the size of their workforces and their primary function. The resulting functional clusters, based on those used in the United Kingdom Civil Service People Survey, are:

  • Policy: organisations involved in the development of public policy
  • Smaller operational: organisations with fewer than 1,000 employees involved in the implementation of public policy
  • Larger operational: organisations with 1,000 employees or more involved in the implementation of public policy
  • Regulatory: organisations involved in regulation and inspection
  • Specialist: organisations providing specialist support to Government, businesses and the public.

Agencies were categorised based on the information they provided in the 2010–11 State of the Service agency survey. Due to the difficulty of assigning agencies with varied roles to a single cluster, the categories were reviewed by the Commission and adjusted where required before being finalised. Functional clusters will be reviewed and improved over time to ensure they identify the most appropriate benchmarking measures available for agencies. See Appendix 2 for information on individual agencies.
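
Under the definitions above, assigning an agency to a cluster could be sketched as follows. The function labels and the agency examples are illustrative assumptions; only the 1,000-employee threshold and cluster names come from the text.

```python
def functional_cluster(primary_function, headcount):
    """Assign an agency to a functional cluster using the definitions above.

    `primary_function` is assumed to be one of 'policy', 'operational',
    'regulatory' or 'specialist' (illustrative labels, not survey codes).
    """
    if primary_function == "operational":
        return "Larger operational" if headcount >= 1_000 else "Smaller operational"
    return {"policy": "Policy",
            "regulatory": "Regulatory",
            "specialist": "Specialist"}[primary_function]

print(functional_cluster("operational", 450))    # Smaller operational
print(functional_cluster("regulatory", 3_200))   # Regulatory
```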

Qualitative data

The employee census provided specified response options for most questions. Several items, however, were open-ended, asking the individual to provide a short written response to a question or statement. These responses were used to complement information gained through quantitative methods. Not all respondents answered the open-ended questions, and comments do not necessarily represent the views of all respondents; they nonetheless represent a rich data source.

Data analysis

Open-ended comment analysis was based on the grounded theory approach in which key concepts from the collected data were coded either manually or with text mining software such as NVivo or Leximancer. Where there were sufficient numbers, comments were segmented by substantive classification levels (APS 1–6, EL, SES), or by agency, to allow more detailed analyses. Comments were reported using themes and concepts rather than individual responses, except when comments were non-attributable and served to highlight especially salient concepts or themes.
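
A rough flavour of the concept-coding step (performed in practice manually or with tools such as NVivo or Leximancer) can be given with a simple keyword tally. The themes, keywords and comments below are entirely made up for the illustration.

```python
from collections import Counter

# Illustrative open-ended comments and a hand-built theme dictionary.
comments = [
    "My workload has increased but leadership support has been good",
    "More flexible working arrangements would help with my workload",
    "Senior leadership communicates the reasons for change well",
]
THEMES = {
    "workload": ["workload", "hours", "pressure"],
    "leadership": ["leadership", "senior", "manager"],
    "flexibility": ["flexible", "flexibility", "part-time"],
}

theme_counts = Counter()
for comment in comments:
    text = comment.lower()
    for theme, keywords in THEMES.items():
        if any(keyword in text for keyword in keywords):
            theme_counts[theme] += 1

print(dict(theme_counts))   # {'workload': 2, 'leadership': 2, 'flexibility': 1}
```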


1 The shortened version of the agency survey completed by small agencies consisted of sections A, B, C, D, G, H and N.

2 See the State of the Service Report 2010–11, p. 269, for further detail on the sample survey methodology.

3 Health and Safety Executive, Work Related Stress – Research and Statistics; R Kerr, M McHugh and M McCrory, ‘HSE Management Standards and stress-related work outcomes’, Occupational Medicine, vol. 59, no. 8, (2009), pp. 574–579.

4 Results may be requested by emailing stateoftheservice [at] apsc.gov.au.

5 J Cohen, Statistical Power Analysis for the Behavioral Sciences, Psychology Press, New York, (2009).

6 RL Rosnow and R Rosenthal, ‘Statistical Procedures and the Justification of Knowledge in Psychological Science’, American Psychologist, vol. 44, no. 10, (1989), pp. 1276–1284.
