
Gauging Change in Australian Aid: Stakeholder Perceptions of the Government Aid Program.

Terence Wood, Camilla Burkot, Stephen Howes.

Abstract

In this article, we use data from the 2013 and 2015 Australian Aid Stakeholder Surveys to gauge the extent of the changes to the Australian Government Aid Program since the 2013 federal election. The two surveys targeted the same set of stakeholders of the aid program, and both gathered data on a wide range of aspects of its functioning. As we assess the findings that emerged from the surveys, we situate our work amongst recent academic studies that have looked at the post-2013 aid changes in Australia. Our key findings are that the post-2013 changes to Australian aid have had wide-ranging impacts and have led to deteriorating overall aid quality. However, changes have not affected all aspects of the aid program equally, and some changes are starting to be reversed. In discussion, we examine what these developments mean for the future of Australian aid.


Keywords:  Australia; Australian politics; DFAT; aid policy; foreign aid

Year:  2017        PMID: 28713568      PMCID: PMC5488629          DOI: 10.1002/app5.173

Source DB:  PubMed          Journal:  Asia Pac Policy Stud


Introduction

On the surface, at least, the change of government in 2013 brought dramatic changes to Australian government aid policy. The aid budget was cut repeatedly (Howes 2015). AusAID, the Australian Government's aid agency, was fully integrated into the Department of Foreign Affairs and Trade (DFAT; Davies 2013). And government policy statements proclaimed the arrival of a ‘New Aid Paradigm’, an increased focus on economic diplomacy, and a new push for innovation in the aid program (Bishop 2014a; Bishop 2014b). At the time, these changes were controversial, generating media commentary and critique. To date, however, there has been little systematic academic work on the changes and their impact. Moreover, the work that does exist has offered very different takes on the post-2013 changes. While some work has taken it as a given that the changes were substantial, other work has downplayed their significance. In this article, we introduce a new, publicly available dataset that serves as a gauge, not only of the quality of Australian aid as a whole but also of the quality of specific aspects of Australian aid policy. The data come from the systematic surveying of Australian aid stakeholders undertaken in both 2013 and 2015. (The surveys are referred to hereafter as the Australian Aid Stakeholder Surveys.) Because the 2013 data were gathered prior to the change in government and the subsequent changes in the aid program, while the 2015 data were gathered far enough after the major changes for their effects to have been felt, the data allow insights into the types and magnitudes of change since 2013. Our data-gathering project is intended to be ongoing, and its ultimate utility will not lie solely in describing the 2013 changes. In this article, however, it is these changes that we focus on, and we describe what the Stakeholder Survey data suggest has occurred to Australian aid since 2013.
As we do this, we examine how well our data fit with the descriptions of change conveyed by existing studies. The article is structured as follows. In the literature review, we first provide a very brief high‐level summation of the post‐2013 changes to Australian aid before discussing the three existing academic papers that have offered views on the post‐2013 changes. We then look at existing international work focused on systematically studying aid quality. As we do this, we look at the strengths and weaknesses of the methods used in this international work. Following the literature review, we provide full details of how we gathered the Stakeholder Survey data, describe what the Stakeholder Survey data cover and discuss the respective strengths and limitations of the data. In the subsequent results section of our article, we then detail change and continuity as captured by the Stakeholder Surveys. In the discussion we compare our findings to the claims made in existing academic work on the Australian changes. We also discuss what the existing state of affairs suggests about the future of Australian government aid policy.

Literature Review

Although, as we will see, the three existing academic papers that discuss recent changes to the Australian Government Aid Program take different views on the significance of those changes, some basic facts are beyond dispute. Eleven days after the 2013 election, the Coalition government announced that AusAID (at that time the Australian government's aid agency) would be fully integrated into DFAT. Prior to this decision, Australia had had a specialist aid agency in some form or another since 1973 (Davies 2013; Davies & Betteridge 2013). A series of cuts to the Australian aid budget also commenced, the first occurring midway through the 2013 parliamentary term. At the beginning of the 2015–16 financial year, cuts of approximately 20 per cent were made to the aid budget (Hockey & Cormann 2014). In both absolute and percentage terms, these were the largest ever cuts to Australian aid (Howes & Pryke 2014). These cuts were followed by additional smaller cuts at the beginning of the following financial year (Howes 2015). Alongside these cuts and changes to the structure of the Australian Government Aid Program, the Australian Foreign Minister, Julie Bishop, stated that she was instigating a ‘New Aid Paradigm’ that would reorient the aid program's work with a focus on ‘Economic Diplomacy’, which appeared to mean focusing more aid on advancing Australia's interests and on economic development. The minister also stated that the ‘New Aid Paradigm’ would bring with it more ‘innovative’ aid (Bishop 2014a; Bishop 2014b). These matters of fact are uncontested in recent academic work. What is contested is the significance of the changes that have occurred, with the three academic studies that have covered the post-2013 period offering different takes. Of the three papers, that of Day (2016) most clearly signals a belief that the changes have been significant.
Day's paper focuses more on why the changes occurred than on their magnitude, but it is clear he believes they were significant. He refers to the changes as ‘dramatic’ (2016, p. 3) and calls the period in which they occurred ‘tumultuous’ (2016, p. 3). To Day, at least some of this tumult stems simply from the budget cuts and integration. However, beyond budget cuts and structural changes, Day also notes a loss of aid expertise, changes in organisational ethos and increased uncertainty as factors that have impacted negatively on the way Australian aid is given (Day 2016, p. 6). The second paper to address the issue of recent changes (Rosser 2016) does not have the changes as a central focus, but rather discusses them alongside changes to Australian aid and foreign policy that have arisen as a result of developmental change in parts of Asia. Rosser's overarching argument is that, in the medium term, there are clear ideological and structural bounds to aid policy in Australia. These bounds have meant that while there have been some differences between Labor and Coalition government aid policy, the differences have been ‘at the margins’ (Rosser 2016, p. 119, emphasis in original). However, by the standards of this constrained policy space, Rosser sees the post-2013 changes as significant:

While these moves undoubtedly reflect long-held Coalition views that aid policy should be subordinated to the national interest and possibly a desire to enhance DFAT funding, they mark a dramatic shift, not only from Labor's aid policies, but also their own during the Howard years. (Rosser 2016, p. 129)

Rosser's argument, for what it is worth, is that the changes have been partially prompted by the rise of China as a foreign policy presence. Unlike Rosser, the third paper to have focused on Australian aid changes, that of Corbett and Dinnen (2016), focuses exclusively on the post-2013 changes to Australian aid.
Corbett and Dinnen contend that the recent changes to Australian government aid do not warrant Minister Bishop's claims of a ‘New Aid Paradigm’ if the term ‘paradigm’ is interpreted as it was by Thomas Kuhn in his work on scientific knowledge (Kuhn 1962, cited by Corbett & Dinnen on p. 88). In practical terms, this seems to mean that Corbett and Dinnen view the post-2013 changes to aid policy as comparatively minor when set amongst the universe of potential approaches to aid that Bishop could have chosen from. Reflecting this, the authors contend that there is considerable continuity, both in Australian aid policy over time and between Australian aid policy and global aid policy. To Corbett and Dinnen, the change that occurred post-2013 was only an ‘incremental change’ (Corbett & Dinnen 2016, p. 99). Moreover, they contend that the so-called New Aid Paradigm has largely represented a reversion to long-held views about aid and development, rather than the arrival of anything new (Corbett & Dinnen 2016, pp. 93–94). In the case of the budget cuts, however, they appear to be in accord with the other authors. Although there are differences in the degree to which the different authors draw upon empirical evidence (with Day being the most empirically oriented), a clear challenge that all three papers face is that, with the exception of the aid cuts and the simple fact of integration, thus far there has been little primary empirical evidence available to help gauge the impact of the post-2013 changes. Indeed, gathering such information is not easy. However, attempts have been made internationally to systematically gather data on the quality of aid programs, both for the sake of international comparison and for the sake of tracking changes over time. It is to this work that we now turn.
Internationally, three academic research teams and one think tank have undertaken sophisticated work attempting to systematically measure aid quality (Center for Global Development & Brookings Institution 2014; Custer et al. 2015; Easterly & Pfutze 2008; Easterly & Williamson 2011; Knack et al. 2011). In some instances, this work has involved more than one iteration. None of this work has specifically focused on Australia, yet all of the studies have included Australia in their assessments. The work in question has involved two different methods: two of the academic teams (those led by Easterly and by Knack), as well as the Center for Global Development and Brookings partnership, have almost exclusively made use of publicly available data or their own observations to create indices of quality; the final academic team (that led by Custer) drew upon its own survey data on the views of officials in recipient countries. In addition to these works, a number of papers have used simple proxies of aid quality (or something similar to it) as independent or dependent variables in regression analysis. While this work is valuable in its own right, we have not included it here because the indices we cover in this literature review provide much more sophisticated takes on the issue of aid quality. Also, there are some studies of aid agency performance available in the grey literature, particularly Organisation for Economic Cooperation and Development (OECD) Development Assistance Committee ‘peer reviews’ of country aid programs. While these are valuable for researchers and for aid programs themselves, they tend to be very gentle in their critiques of aid programs and for this reason are of limited utility to someone who wishes to systematically study aid quality. In Australia's case, there has not been a full peer review since 2013, which makes intertemporal comparison involving the recent changes impossible.
This is also the case with reviews of the Australian Government Aid Program that have been commissioned by the government itself (for the most recent of these, see Hollway et al. 2011). The indices-based work is of varying sophistication, and the different indices draw upon differing data sources. All three make use of donors' aid data as reported to the OECD (how much donor countries give, to which countries they give and other specific attributes of the spending, such as the sectors aid is spent on). In addition to this, the work of Easterly and his two different co-authors draws heavily on their assessment of the information available on aid donor websites, while the Knack and Center for Global Development indices draw on surveys of donor practices (typically those conducted by the OECD). For reasons of space, in this section we only describe the work of Easterly and his collaborators in depth. This particular work is chosen because it provides an easily explicable example of the approach. It should be noted that some critiques that can be made of Easterly's work are not applicable to the other authors. Reflecting this, when we discuss the shortcomings of the use of indices, we only comment on shortcomings that apply to all work in this genre. The work of Easterly and his co-authors has focused on transparency, administrative costs, fragmentation, selectivity and the use of ineffective aid channels. Transparency was measured by the aid programs' reporting to the OECD, the information available on aid programs' websites and whether aid programs would release information when contacted. Overheads were measured by administrative costs as a share of Official Development Assistance (ODA) spent, staffing costs as a share of ODA spent and staff numbers relative to ODA spending. Fragmentation was measured as the extent to which an individual donor fragmented its aid across recipient countries and sectors.
Selectivity was measured as the extent to which donors focused their aid on low income countries, well-governed countries and democratic countries. Finally, aid given through inefficient channels was measured as the share of aid given as tied aid, food aid or technical assistance (Easterly & Pfutze 2008; Easterly & Williamson 2011). Indices created in this manner can prove useful, especially for approximate cross-country comparisons. However, this method suffers two types of limitation. The first is that indices tend to simplify what good practice is, stripping out (to varying degrees) crucial contextual information. For example, Easterly and his co-authors penalise countries for fragmenting their aid across different sectors. This may be reasonable as a broad principle, but it also leads to situations such as Easterly and Williamson's assessment of sectoral change in New Zealand aid over time (2011, p. 1941):

For example, in 1999 New Zealand concentrated 32% of its aid to post-secondary education; however, over the past nine years, New Zealand has fragmented its aid among more sectors with no sector receiving more than 12% in 2008, and most much less.

(The approach of Knack et al. is more nuanced than that of the other two groups, allowing for some mitigating factors; however, the New Zealand example would still have been scored as a deterioration using Knack's methods.) This might seem like a trend of considerable deterioration. Yet New Zealand's aid was so concentrated in 1999 because it gave most of its aid as tertiary scholarships, which were of questionable developmental merit but which served New Zealand's foreign policy objectives, a fact that had been noted critically in both OECD and New Zealand Government reviews (Ministerial Review Team 2001, p. 5). Subsequent fragmentation was a product of New Zealand creating a specialised government aid program and starting to focus more seriously on recipient country needs—hardly a deterioration in aid quality.
The existing aid quality indices also miss much that is important. For example, although all of the indices are concerned with transparency, none would have picked up the fact that the quantity and quality of aid project information on the Australian Government Aid Program website decreased after 2013 (DeCourcy & Burkot 2016), or that between 2013 and 2015 the detail provided in Australian aid budgets declined substantially before improving again (Howes 2015). Such limitations are inevitable in any cross-country aid quality quantification undertaking based on publicly available data. The number of countries involved will always pull against subtlety and detail. This is not a flaw in the index approach as such, but it does mean that someone wishing to gauge the extent of change in Australian aid since 2013, and to understand exactly where change has occurred, will gain only very limited insights from existing indices. (Another issue is that none of the indices have been updated to contain anything more recent than 2012 data.) An alternative to indices can be found in the work of the Listening to Leaders research team (Custer et al. 2015). This work involved a large survey of senior government employees in developing countries. Amongst the focus points of the study were questions on the quality and utility of donor country advice and the extent to which survey respondents felt inclined to engage with specific donors. The dataset produced is rich and provides much fruitful material for analysis. Indeed, the process of gathering data by surveying stakeholders is effectively the same as the one we use in our own work (although we survey a different set of stakeholders).
However, for someone wishing to understand changes to Australian aid since 2013, the Listening to Leaders work has two major limitations: first, it was a one-off study, which eliminates the potential for intertemporal comparisons; and second, the data are primarily focused on a very specific subset of donor–recipient interactions, which means that important areas such as donor transparency, donor interactions with NGOs and donors' development focus are not covered.

Methods and Data

In order to capture both the overall extent of change in the Australian Government Aid Program and the details of specific changes, we draw on two systematic surveys of stakeholders of the Australian Government Aid Program. The first of these surveys was conducted in 2013, prior to the election of that year and the subsequent changes in the aid program. The second was conducted in the second half of 2015, after the reintegration of the aid program into DFAT, after the changes in the focus of the aid program had been instituted and after most of the cuts to the aid program's budget had occurred. To the greatest extent possible, both in terms of sampling and the questions used, the 2015 survey followed the same methodology as the 2013 Australian Aid Stakeholder Survey. This has allowed for comparison between the two years. Both years' surveys were conducted in two phases. The first phase targeted a population of expert stakeholders: senior managers of Australian NGOs and development contractors. The targeted experts came from all of Australia's larger NGOs and contracting firms, as well as from a random sample of smaller NGOs. Targeted experts were emailed a link to an online survey questionnaire, and repeated follow-up was used to achieve as high a completion rate as possible. In 2015, this phase ran from 6 July until 6 October, and 155 stakeholders were targeted. The response rate was 64 per cent for NGOs and 85 per cent for development contractors. In 2013, the same phase ran from 17 June until 31 August, and 148 expert stakeholders were targeted. The response rate was 65 per cent for NGOs and 84 per cent for contractors. The second phase of the survey was open to the public and advertised through the Australian National University's Development Policy Centre website and blog and through associated development networks.
Because of the risk of selection bias, we do not draw on data from the second phase in this article; however, overall responses to the second phase were similar to those of Phase 1, and all Phase 2 data are available online. Both Phase 1 NGO and Phase 1 contractor data are included in the following analysis. While it is possible to disaggregate the two groups, for reasons of space we have reported on them together. Interestingly, while responses to individual questions differed somewhat between NGOs and contractors, their overall assessments of the aid program in 2015 were similar. It is important to note that the Stakeholder Survey data only capture perceptions of change, not change itself. This means that Stakeholder Survey data are only useful to the extent that we can be confident that stakeholders' responses are not significantly biased. Because of this, we considered and checked for possible sources of bias as we analysed the data. One possible source of bias in the 2015 data is ideological: a dislike of the Coalition government. Such a source of bias is plausible, but we do not think it a likely issue in practice. On average, in 2015, private sector contractors offered assessments that were as negative as those from NGO staff. If all our negative assessments had come from NGOs, we would have been more worried about ideological bias; however, this was not the case. Moreover, as can be seen in the online data, stakeholders' assessments of Foreign Minister Julie Bishop were largely positive, an unlikely outcome if stakeholders were all ideologically opposed to the current government. Another possible source of bias was the aid cuts in the 2015 federal budget, which may have unduly influenced stakeholders' assessments of unrelated areas such as aid program effectiveness. This is a potential issue; however, once again, there are good reasons to believe views about aid cuts have not markedly skewed other assessments.
Stakeholders' appraisals highlighted particularly large deteriorations in specific areas such as transparency, expertise of aid program staff and communications. The pronounced deterioration of these areas relative to other attributes asked about is unlikely if stakeholders' concerns were solely budgetary. Moreover, when we tested to see which aid program attributes were most strongly correlated with stakeholders' overall assessment of whether the aid program was becoming more or less effective, we found stakeholders' responses to the question we asked about funding to be less strongly correlated with views on changing overall effectiveness than many other aid program attributes were (results available from the authors on request). This is hard to square with a situation in which stakeholders were so fixated on funding that it distorted their views on all aspects of performance. A further cause for confidence in stakeholders' assessments is that, in the case of transparency and communications, two areas where major deteriorations were reported by stakeholders, other non‐subjective empirical work has highlighted deteriorations since 2013 (Betteridge 2016; DeCourcy & Burkot 2016). Another limitation of the Stakeholder Survey data is that the Phase 1 data did not target aid program employees or key stakeholders in aid recipient countries. This is an acknowledged limitation and one we hope to address in future rounds of the survey. For now, it is worth noting that some Australian government staff and participants from recipient countries did take part in the second phase of the Stakeholder Survey, and that responses from Phase 2 were, as we noted earlier, similar to those from Phase 1. The 2013 and 2015 Australian Aid Stakeholder Surveys were the first of their kind in Australia. As far as we are aware, they are the first such donor surveys to have been conducted in this manner in any OECD country. 
(The Listening to Leaders survey described earlier was both multi‐country and conducted primarily in aid recipient countries.) All of the data from the Australian Aid Stakeholder Surveys are available online at https://devpolicy.crawford.anu.edu.au/aid‐stakeholder‐survey/2015. In 2015, we also conducted a stakeholder survey in New Zealand. The results of this survey are not covered here, but data from this survey are available online at the same location that the Australian data can be found.

Results

In this section, we draw on the data from the first phase of the Stakeholder Surveys (the targeted phase) to provide a sense of change in the aid program since 2013. First, we look at high‐level issues such as changes in the ethos of Australian aid and changes in the overall quality of the aid program, then we look at more specific aspects of aid program functioning.

Overall Quality

Both 2013 and 2015 respondents were asked to rate the overall effectiveness of the aid program. In both years, a plurality of respondents rated the aid program as effective, although the share of respondents who gave this response fell by over 8 percentage points between the two years (from 68 per cent to 60 per cent). This is a notable change. More striking, though, are the differences in responses to the question we asked about trends of improvement or deterioration in the aid program. Here, as can be seen in Figure 1, the shift is dramatic: in 2013, more than three quarters of respondents thought the aid program was becoming more effective; in 2015, three quarters of respondents thought it was becoming worse. Taken together, responses to the questions about effectiveness show a clear deterioration in stakeholders' perceptions of the aid program. They do not point to a complete collapse in quality, but they do highlight a clear change in trend: a program that had been thought to be improving was, by 2015, seen as changing for the worse.
Figure 1

Change in Effectiveness of Aid Program
Note: Exact percentages for all figures can be found in the online data, which are linked to from the Methods section.

The Purpose of Australian Aid

The extent to which donors actually give aid altruistically, rather than to advance their own interests, is a contested topic in aid research, with recent research suggesting that on average, aid donors are neither perfectly altruistic nor completely selfish (Heinrich 2013; Hoeffler & Outram 2011). In 2013 and 2015, we asked Stakeholder Survey participants to identify the relative importance that reducing poverty, advancing Australia's strategic interests and advancing Australia's commercial interests had in guiding the work of the Australian aid program. Figure 2 shows kernel density plots of stakeholders' responses to the poverty section of this question, comparing 2013 to 2015. As the figure shows, the typical stakeholder in 2015 thought the aid program had less of a poverty focus than was the case in 2013. In 2013, the most frequent response (given by 43 per cent of respondents) was that between 40 and 60 per cent of the emphasis of Australian aid was on poverty, rather than advancing Australia's interests. In 2015, the most frequent response (given by 54 per cent of respondents) was that just 10 to 30 per cent of the focus of Australian aid was on poverty.
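The kernel density comparison behind Figure 2 can be sketched in a few lines. This is a minimal, hand-rolled Gaussian kernel density estimator, and the response values and bandwidth below are invented for illustration; they are not the Stakeholder Survey data.

```python
# Minimal Gaussian kernel density estimator, as a sketch of the method
# behind Figure 2. The "per cent weight on poverty" responses below are
# hypothetical illustrations, not the survey's actual data.
from math import exp, pi, sqrt

def gaussian_kde(sample, bandwidth):
    """Return a density function estimated from the sample."""
    n = len(sample)

    def density(x):
        # Average of Gaussian kernels centred on each observation
        return sum(
            exp(-0.5 * ((x - xi) / bandwidth) ** 2) / (bandwidth * sqrt(2 * pi))
            for xi in sample
        ) / n

    return density

# Hypothetical responses: per cent of aid program emphasis placed on poverty
density_2013 = gaussian_kde([50, 55, 60, 45, 50], bandwidth=10)
density_2015 = gaussian_kde([20, 25, 30, 15, 20], bandwidth=10)
```

Evaluating each density over a 0–100 grid and plotting the two curves gives an overlaid comparison of the kind shown in the figure, with the 2015 mass sitting well to the left of the 2013 mass.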
Figure 2

Weight Placed on Poverty by the Australian Aid Program 2013 and 2015

What Aid Is Spent On

In addition to containing information about the overarching purpose of Australian aid, both the 2013 and 2015 Stakeholder Surveys contained questions about the types of work Australian aid was spent on, although some spending areas were described in different ways in the two years. Figure 3 shows responses to the 2015 question. By and large, Figure 3 reveals a situation where most stakeholders were satisfied with the types of work the aid program was spending money on. The main exceptions to this are ‘health and education’ and ‘resilience and social protection’, which stakeholders thought the government was not focused enough on, and ‘infrastructure and trade’, which stakeholders viewed as being on the receiving end of too much attention. By way of comparison, in 2013, only about a quarter of respondents thought health and education received too little attention. On the other hand, in 2013, 46 per cent of respondents thought too little weight was placed on sustainable economic development, a clear contrast with the 66 per cent of respondents in 2015 who thought too much weight was placed on the broadly analogous category of infrastructure and trade.
Figure 3

Weight Placed on Different Spending Areas, 2015

Changing Aid Program Attributes

Thus far, the aspects of the aid program that we have discussed have been high level. Capturing stakeholder perceptions at this level is useful; there are obvious practical reasons to be concerned about overall changes in an aid program's operation. However, beneath the headline shifts, there are many questions to be asked about the specifics of change; it is unlikely that all aspects of Australian aid have changed to equal degrees. Finding out exactly what has changed, whether for the better or the worse, is important. Figure 4 compares the average scores of a suite of specific aid program attributes that we asked about in 2013 and 2015. Both axes have potential scales of one to five. An attribute would score one if all respondents gave it the lowest possible appraisal. An attribute would score five if all respondents gave it the highest possible appraisal. The dashed red line shows a one-to-one relationship. The further an attribute lies from the line, the larger its change between 2013 and 2015. Attributes below the line deteriorated between 2013 and 2015. Attributes above the line improved.
Figure 4

Change in Individual Attributes from 2013 to 2015

This average was calculated as follows. Each respondent's response to a question was converted to a numeric scale on which the most negative possible response was scored one and the most positive possible response was scored five. The quantified responses were then averaged across respondents. More questions were asked about aid program attributes in the survey; however, we have restricted ourselves to this particular suite because the questions were all drawn from the same section of the survey and all have similarly scaled responses. Notably, there is a reasonable correlation between the two years (r = 0.59). Although there are some striking exceptions, for the most part, the aid program's strong points in 2013 were still its strong points in 2015. However, the majority of the attributes charted in Figure 4 lie below the one-to-one line, which suggests most areas of aid program performance have become worse, often substantially worse. Table 1 shows each attribute, its score in both years and the magnitude of change, as well as p-values from t-tests of the changes. In some instances, such as predictability of funding, deterioration is hardly surprising given the budget cuts. Many of the other changes, however, are not in areas directly related to spending.
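The scoring and correlation just described can be sketched as follows. The answer labels here are hypothetical stand-ins for the survey's actual response options, which are not reproduced in this article.

```python
# Sketch of the attribute scoring described above: each response is mapped to
# a one-to-five scale and then averaged across respondents. The answer labels
# below are hypothetical, not the survey's actual wording.
LIKERT = {"very poor": 1, "poor": 2, "neutral": 3, "good": 4, "very good": 5}

def attribute_score(responses):
    """Mean score for one attribute, on the one-to-five scale."""
    return sum(LIKERT[r] for r in responses) / len(responses)

def pearson_r(xs, ys):
    """Pearson correlation between the two years' lists of attribute scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Applied to the two columns of attribute scores in Table 1, a calculation of this form is what produces the r = 0.59 correlation reported above.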
Table 1

Change in Individual Attributes from 2013 to 2015

Attribute                                 2013    2015    Difference    p-value
Predictability of funding                 2.91    1.37    −1.55         0.00
Transparency                              3.44    2.36    −1.08         0.00
Strategic clarity                         3.52    2.62    −0.90         0.00
Communication and community engagement    2.83    2.12    −0.72         0.00
Realism of expectations                   2.83    2.33    −0.50         0.00
Staff expertise                           2.67    2.19    −0.48         0.00
Selectivity/fragmentation                 2.81    2.36    −0.45         0.00
Performance management and reporting      3.25    2.82    −0.43         0.00
Monitoring                                3.30    2.98    −0.32         0.00
Evaluation                                2.96    2.78    −0.18         0.00
Focus on results                          3.21    3.11    −0.10         0.02
Partnerships                              2.98    2.89    −0.09         0.02
Staff continuity                          1.51    1.46    −0.05         0.15
Appropriate attitude to risk              2.78    2.82    0.03          0.54
Avoid micromanagement                     2.37    2.44    0.07          0.26
Quick decision making                     1.96    2.17    0.21          0.05
Overall average                           2.83    2.42    −0.41
P‐values come from a two‐tailed unequal variance t‐test, with a finite population correction applied to the standard errors. Because sampling was non‐random, the p‐values should be used only as a heuristic.

The deterioration in many of the attributes is marked, and yet it is not equal across the board. In areas ranging from transparency to staff expertise, the fall in assessments across the two years is clear. In other areas, such as staff continuity, the shift is small enough to be effectively indistinguishable from zero, while quick decision‐making was actually assessed more positively in 2015. There were also improvements in attitudes to risk and avoidance of micromanagement, though these were not statistically significant. Other than predictability of funding, transparency was the attribute that fell the most. As Figure 5 shows, fewer than a quarter of stakeholders thought transparency was a weakness or a great weakness in 2013, yet by 2015, 58 per cent did. The reasons for stakeholders' concerns were readily apparent to anyone paying attention to the aid program. At that point in time, aid activity data on the aid program's website were diminished, historical time series data were not being updated, and the aid information released alongside the federal budget was much less detailed than it had been. Since that nadir, which coincided with the 2015 Stakeholder Survey, information availability has improved again, although a recent detailed audit of aid program transparency shows that it has not yet reached 2013 standards (DeCourcy & Burkot 2016).
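The test described in the Table 1 footnote can be illustrated with a short sketch: Welch's unequal‐variance t statistic, with each standard error shrunk by the finite population correction. The summary statistics, sample sizes and stakeholder‐population sizes in the example are invented for illustration only.

```python
import math

def fpc(n, N):
    """Finite population correction for a sample of n drawn without
    replacement from a finite population of size N."""
    return math.sqrt((N - n) / (N - 1))

def welch_t_with_fpc(m1, sd1, n1, N1, m2, sd2, n2, N2):
    """Two-sample unequal-variance (Welch) t statistic with FPC-adjusted
    standard errors, as described in the Table 1 footnote."""
    se1 = (sd1 / math.sqrt(n1)) * fpc(n1, N1)
    se2 = (sd2 / math.sqrt(n2)) * fpc(n2, N2)
    return (m1 - m2) / math.sqrt(se1**2 + se2**2)

# Hypothetical 2013 vs 2015 summary statistics for one attribute:
t = welch_t_with_fpc(2.91, 1.0, 100, 400, 1.37, 0.9, 120, 400)
```

Because the stakeholder samples were not drawn randomly, a statistic computed this way is best read as a heuristic gauge of the size of a change, not as a formal significance test.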
Figure 5

Aid Program Transparency

As Figure 6 shows, one area where there was almost no change according to stakeholders was an area of perennial weakness for the Australian Government Aid Program: staff continuity. Staff continuity was the lowest scoring attribute in 2013, and only funding predictability scored worse in 2015. At 1.46 on a scale of one to five, the 2015 score is hardly cause for celebration. Nevertheless, given the integration of the aid program into DFAT and the staffing changes it brought, one might well have anticipated deterioration on this measure. That such deterioration is not apparent points to the sometimes‐surprising effects of the 2013 change of government on Australian aid. The new government promised increased transparency, and yet transparency became much worse. The aid program was fully integrated into DFAT, an event that was surely disruptive for staff, and yet, from the perspective of stakeholders, staff continuity continued much as it had before.
Figure 6

Staff Continuity and Expertise

On the other hand, stakeholders did notice a large drop in staff expertise. Staff expertise had not been assessed particularly kindly by stakeholders in 2013; it was viewed very poorly indeed in 2015. The issue of staff expertise also came up repeatedly in open‐ended questions included in the Stakeholder Survey. Here, a number of respondents argued that the loss of staff expertise was not only a product of a large number of AusAID staff resigning or accepting redundancy after the merger but also a consequence of DFAT failing to value development expertise. One respondent, for example, raised the issue of 'the marked devaluation of aid program management skills and the lack of recognition in DFAT senior management of the depth of expertise required'. Interestingly, in some areas of aid program work, such as evaluations and a focus on results, where one might have imagined a fall in expertise bringing a commensurate fall in performance, stakeholders' assessments suggest this has not been the case (both attributes became worse, but not dramatically worse). Integration appears to have had an overarching impact on staff expertise, but the impact of the loss of expertise itself has not been uniform.

Discussion

Assessing the quality of an aid program is not an easy task. Aid programs are multifaceted, with many features contributing to their performance. Moreover, the state of any individual aid program is a product of choices ranging from high‐level decisions about the ethos of aid giving, to seemingly technical decisions about organisational structure, to subtle facts such as whether the cultures of particular government departments value aid expertise. Such subtlety appears beyond the ability of existing aid quality indices to capture. In this article, we have demonstrated how a different approach, the surveying of key aid stakeholders, can produce nuanced empirical grounding for assessing not only the state of aid programs but also the extent to which they are changing over time. The data from the 2015 Stakeholder Survey provide good evidence to bolster the arguments of those academic studies that have treated the post‐2013 changes in Australian aid as an outlier event: an instance of change that was far from the norm. It could also be the case, as Corbett and Dinnen contend, that the government's New Aid Paradigm is associated with ideas that are not necessarily new in a historical sense. However, the New Aid Paradigm has clearly been associated with substantial, and important, change within the last three years. Stakeholders' views paint a picture of deteriorating overall aid program effectiveness. Stakeholders' responses also suggest a change in the ethos of Australian aid giving, as well as changes to the sectoral make‐up of Australian aid spending and to specific attributes of the aid program. This much seems clear. What we are less certain of is what will occur over the coming years. One day prior to the release of the 2015 Stakeholder Survey, the then Secretary of the Department of Foreign Affairs and Trade, Peter Varghese, spoke at the 2016 Australasian Aid Conference.
In his speech, which emphasised the importance of development knowledge, he stated that:

The reduction in size of the aid budget required a reduction in the size of the workforce to manage it. But we recognise that delivering a high‐quality aid program requires a strong mix of generalist and specialist skills. This is why we are strengthening our workforce planning to enable us to recruit and retain development professionals and sector experts. We are taking steps to improve our knowledge capture and transfer between staff, and to refine our extensive program of training and mentoring of DFAT staff. (Varghese 2016)

If these claims translate into substantial effort, a major area of deterioration identified by stakeholders will have begun to be addressed. Similarly, aspects of transparency, as we noted earlier, have begun to improve since 2015. The amount of information available on the DFAT website is still some way short of what was available from AusAID, but improvements are occurring. At this point in time, neither the Australian Labor Party nor the current Liberal‐National Coalition government is talking of recreating an independent aid agency akin to AusAID, nor are the two major parties proposing aid spending increases over the coming years large enough to offset the 2015–16 aid cuts (Davies 2016). Some of the post‐2013 changes seem unlikely to be reversed in the foreseeable future. On the other hand, as we showed in Figure 4, the relative strengths and weaknesses of the aid program were not entirely overturned by the post‐2013 changes. And while there was clear deterioration in some areas, in others, attributes did not deteriorate dramatically. Moreover, in some areas where quality had deteriorated, it is improving again now.
Together, these facts point to the possibility that, even if its form is forever changed, the overall functioning of the Australian Government Aid Program may, over time, start to revert to something like its pre‐2013 state, at least to a degree. For now, the extent to which this will occur is uncertain. What is more certain is the ongoing need for a robust empirical base that allows for the continued assessment of Australian aid. We have shown in this article how one type of evidence, the perceptions of aid stakeholders, can contribute to academic work that hinges on understanding these changes. It is not the only evidence that should be used in such work, but for a phenomenon as complex as aid, the views of insiders and experts are indispensable to inform not only academic debate but also policy decisions.