Information in this section is adapted from the SOSR 2011–12. Further detail on these issues can be found in Appendix 3 of the SOSR 2011–12 (pp. 265–275).
Employee census methodology
The census population consisted of all APS employees (ongoing and non-ongoing), as recorded in the Australian Public Service Employment Database (APSED) on 4 May 2012.24 A total of 87,214 employees took part, a response rate of 55%.
To ensure confidentiality, each employee was provided with a unique password to prevent multiple responses from individuals. Only a small number of staff of the external survey provider, ORC International, had access to both individual names and their unique passwords. All responses provided to the Commission by ORC International were de-identified. Due to these precautions, APSC staff could not identify individual respondents to the census or identify those who had not taken part.
Agency survey methodology
A key source of data for the SOSR is the agency survey, which was completed by 101 APS agencies (or semi-autonomous parts of agencies) employing at least 20 staff under the Public Service Act 1999. Nine agencies had fewer than 20 APS employees; employees from these agencies completed a shortened version of the survey consisting of sections A, B, C, D, G, H and N.
The analysis draws on both quantitative and qualitative data.
A total of 87,214 employees completed the employee census. With sample sizes this large, it is likely that even very small differences will be statistically significant. This is because as the sample sizes increase, the size of the difference required for a significance test to return a positive result decreases.
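The effect of sample size on significance can be illustrated with a short sketch (the proportions and sample sizes below are illustrative only, not census figures): the same one-percentage-point difference is statistically significant between two groups of 40,000 but not between two groups of 400.

```python
from math import sqrt, erf

def two_proportion_p_value(p1, p2, n1, n2):
    """Two-sided p-value for a pooled two-proportion z-test."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Same 1-point difference, two different sample sizes (illustrative values)
print(two_proportion_p_value(0.50, 0.51, 40_000, 40_000))  # < 0.05: significant
print(two_proportion_p_value(0.50, 0.51, 400, 400))        # > 0.05: not significant
```

As the sample grows, the standard error shrinks, so ever-smaller differences clear the significance threshold.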
In order to avoid presenting misleading information by over-emphasising statistically significant but very small differences, an additional criterion has been considered: the size of the difference (the effect size).
Where a difference is statistically significant and greater than small in magnitude, it has been flagged in the text. Differences smaller than this are unlikely to be visible in the workplace or to form a sound evidence basis for pragmatic recommendations. These are generally not reported unless they reinforce other findings with larger effect sizes. Table 1 shows the statistical criteria used to determine small effect size. These guidelines were published in Statistical Power Analysis for the Behavioral Sciences25 and are widely used in the social sciences.
Table 1: Statistical criteria used to determine small effect size

| Comparison | Effect size statistic used | Cut-off for small effect size |
|---|---|---|
| Comparison of independent proportions | Cohen’s h | h ≥ 0.20 |
| Comparison of two independent means (t-test) | Cohen’s d | d ≥ 0.20 |
| Comparison of three or more independent means (ANOVA) | Cohen’s f | f ≥ 0.10 |
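The statistics in Table 1 can be computed directly. A minimal sketch, using the standard formulas from Cohen (1988); the example values are illustrative only, not census results:

```python
import math

def cohens_h(p1, p2):
    """Cohen's h: effect size for two independent proportions (arcsine-transformed)."""
    return abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))

def cohens_d(mean1, mean2, pooled_sd):
    """Cohen's d: standardised difference between two independent means."""
    return abs(mean1 - mean2) / pooled_sd

def cohens_f(group_means, within_sd):
    """Cohen's f: spread of group means relative to within-group SD (ANOVA)."""
    grand = sum(group_means) / len(group_means)
    var_between = sum((m - grand) ** 2 for m in group_means) / len(group_means)
    return math.sqrt(var_between) / within_sd

# Illustrative values: a 1-point gap in proportions gives h of about 0.02,
# well below the h >= 0.20 cut-off, so it would be screened out even if
# a significance test flagged it.
print(cohens_h(0.50, 0.51))            # below the small-effect cut-off
print(cohens_d(3.8, 4.0, 1.0))         # 0.20: at the cut-off
print(cohens_f([3.8, 3.9, 4.0], 1.0))  # below the f >= 0.10 cut-off
```

This two-stage screen (significance first, then effect size) is what keeps statistically significant but practically trivial differences out of the reported findings.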
Where possible, results from the 2012 employee census have been compared with the 2009 Indigenous census; however, several issues covered in some detail in the 2009 Indigenous census were not addressed in the employee census, including employment history and factors that hinder Indigenous employees in seeking higher positions. Conversely, the APS Engagement and Employee Value Proposition models were developed after the 2009 Indigenous census, so comparisons on these cannot be made. Other issues, such as job satisfaction, satisfaction with learning and development, and performance management, are covered in both, and comparisons have been made where appropriate.
The employee census provided specified response options for most questions. Several items, however, were open-ended, asking respondents to provide a short written response to a question or statement. These responses were used to complement the information gained through quantitative methods. Not all respondents answered the open-ended questions, and comments do not necessarily represent the views of all respondents; nevertheless, they provide a rich data source.
24 Over the survey period, employees who were recorded in agency HR systems but had not yet been recorded in APSED were invited to take part in the census as they were identified.
25 J. Cohen, Statistical Power Analysis for the Behavioral Sciences, Psychology Press, New York, 1988.