Get to know our work

Publications

Here’s a selection of outputs that have come out of the grant so far. Get in touch if you’d like to learn more.


  • Download here

    A major shift in the research sector has been the increased expectation from policymakers and funders that academic research should yield some socioeconomic benefits or ‘impacts’ rather than merely new knowledge. In this paper, we explore the role that impact has in academics’ motivations and values and how impact is being integrated into academics’ core functions of research and education. We do this through in-depth interviews (n = 60) with scientists who work on the development or application of artificial intelligence (AI), broadly defined. This focus on AI situates our participants within a strategically important, high-priority area of research for all three national contexts included in our study: Australia, the UK and the USA. Our findings reveal that the impact mission has become central to understanding the motivations and values of academics, but unevenly. We identify divergence between those who work on AI from a foundational computer science perspective and those who develop and apply AI within other scientific domains. The two groups have different understandings of key notions such as ‘impact’ and ‘applied research’, as well as different ways of integrating the impact agenda into their research and education activities. The study highlights the importance of flexible approaches to research policy and governance that are based on a deeper understanding of what motivates researchers, and that take into account academics’ educational role. A greater holistic understanding of how academic identities and practices are accommodating the impact agenda is essential to maximise synergy across activities and avoid unintended consequences.

  • Forthcoming

    This article considers how the emerging Artificial Intelligence (AI) research field is constructed, primarily in university settings. AI research is a site of significant national funding, industry investment, and media interest. As such, for researchers working across the resource-constrained science system, their relationship to the field is significant; legitimisation as an AI researcher can bring material and symbolic rewards. Through interviews (n = 90) with academics affiliated with AI-branded research organisations in the US, UK, and Australia, the article develops an empirical account of the construction of AI research as a high-dimensional field: a field that moves between multiple disciplinary and sectoral boundaries across national and international hierarchies. The article draws on the sociology of expertise and studies of research infrastructures to develop the conceptual frame of dimensionality to explain the vertical and horizontal dynamics informing the AI field’s development. The article’s contributions are its description of the emerging AI field, which complements critical studies of how the figure of AI is mobilised in other settings, and its extension of field theory to fluid spaces that leverage the boundary zone between several overlapping field arrangements.

  • Download here

    Research in the global field of artificial intelligence is increasingly hybrid in orientation. Researchers are beholden to the requirements of multiple intersecting spheres, such as the scholarly, public, and commercial, each with its own language and logic. Relatedly, collaboration across disciplinary, sectoral, and national borders is increasingly expected, or required. Using a dataset of 93,482 artificial intelligence publications, this article operationalises the scholarly, public, and commercial spheres through citations, news mentions, and patent mentions, respectively. High-performing publications (99th percentile) for each metric were separated into eight categories of influence. These comprised four blended categories of influence (news, patents and citations; news and patents; news and citations; patents and citations) and three single categories of influence (citations; news; patents), in addition to the ‘Other’ category of non-high-performing publications. The article develops and applies two components of a new hybridity lens: evaluative hybridity and generative hybridity. Using multinomial logistic regression, selected aspects of knowledge production – research context, focus, artefacts, and collaborative configurations – were examined. The results elucidate key characteristics of knowledge production in the artificial intelligence field and demonstrate the utility of the proposed lens.

  • Download here

    The objective of this study is to investigate the application of machine learning techniques to the large-scale human expert evaluation of the impact of academic research. Using publicly available impact case study data from the UK’s Research Excellence Framework (2014), we trained five machine learning models on a range of qualitative and quantitative features, including institution, discipline, narrative style (explicit and implicit), and bibliometric and policy indicators. Our work makes two key contributions. Based on the accuracy metric in predicting high- and low-scoring impact case studies, it shows that machine learning models are able to process information to make decisions that resemble those of expert evaluators. It also provides insights into the characteristics of impact case studies that would be favoured if a machine learning approach were applied for their automated assessment. The results of the experiments showed a strong influence of institutional context, selected metrics of narrative style, as well as the uptake of research by policy and academic audiences. Overall, the study demonstrates promise for a shift from descriptive to predictive analysis, but suggests caution around the use of machine learning for the assessment of impact case studies.

  • Download here

    A key goal of public policy and public administration research is to inform policy decisions. It is not clear, however, to what extent this is the case. In this study, therefore, citations from policy documents to public policy and administration research were analyzed to identify which research contributed most to policy reports and decisions. Additionally, we identified which policy institutions used research literature more than others to justify their policy decisions. Our findings show that think tanks use public policy and administration research literature more often than governmental organizations when justifying policy reports and decisions.

  • Download here

    There is no singular way of measuring the value of research. There are multiple criteria of evaluation given by different fields, including academia but also others, such as policy, media and application. One measure of value within the academy is citations, while indications of wider value are now offered by altmetrics. This study investigates research value using a novel design focusing on the World Bank, which illuminates the complex relationship between valuations given by metrics and by peer review. Three theoretical categories, representing the most extreme examples of value, were identified: ‘exceptionals’, highest in both citations and altmetrics; ‘scholars’, highest in citations and lowest in altmetrics; and ‘influencers’, highest in altmetrics and lowest in citations. Qualitative analysis of 18 interviews using abstracts from each category revealed key differences in ascribed characteristics and judgments. This article provides a novel conception of research value across fields.

  • Download here

    Before problems can be solved, they must be defined. In global public policy, problems are defined in large part by institutions like the World Bank, whose research shapes our collective understanding of social and economic issues. This article examines how research is produced at the World Bank and deemed to be worthwhile and legitimate. Creating and capturing research on global policy problems requires organisational configurations that operate at the intersection of multiple fields. Drawing on an in-depth study of the World Bank research department, this article outlines the structures and technologies of evaluation (i.e., the measurements and procedures used in performance reviews and promotions) and the social and cultural processes (i.e., the spoken and unspoken things that matter) in producing valuable policy research. It develops a theoretically informed account of how the conditions of measurement and evaluation shape the production of knowledge at a dominant multilateral agency. In turn, it unpacks how the internal workings of organisations can shape broader epistemic infrastructures around global policy problems.

  • Download here

    Academics undertaking public policy research are committed to tackling interesting questions driven by curiosity, but they generally also want their research to have an impact on government, service delivery, or public debate. Yet our ability to capture the impact of this research is limited because impact is under-theorised, and current systems of research impact evaluation do not allow for multiple or changing research goals. This article develops a conceptual framework for understanding, measuring, and encouraging research impact for those who seek to produce research that speaks to multiple audiences. The framework brings together message, medium, audience, engagement, impact, evaluation, and affordance within the logics of different fields. It sets out a new way of considering research goals, measurements, and incentives in an integrated way. By accounting for the logics of different fields, which encompass disciplinary, institutional, and intrinsic factors, the framework provides a new way of harnessing measurements and incentives towards fruitful learning about the contribution diverse types of public policy research can make to wider impact.