A Necessary Pause: Responsibility, AI, and the Year Ahead

I recently received feedback on an early draft of my PhD introduction suggesting that my writing needs to move towards a more academic register. I think this is a fair critique, and one I’m committed to working on as I begin developing my first paper.

That said, rereading what I wrote, I still feel it captures something important about the moment we find ourselves in. This post reflects that perspective, sitting deliberately at the intersection of industry practice and academic inquiry, as I look to make progress in both.

I’d be keen to hear what others think and to engage in thoughtful discussion around Responsible AI.

If you close your eyes and listen closely enough, you will hear the sound of Artificial Intelligence (AI) slowly being interwoven into our way of living. AI is no longer something imagined in some mythical science fiction fantasy; it is here and, for many, available at the mere touch of a button. Yet while AI is transforming our day-to-day lives in countless ways, the motives driving its explosion are not new. Humans have a long history of seeking automation and decision support to reduce labour and cognitive burden. From mechanical clocks, watermills and windmills, and industrial assembly lines supporting practical tasks, to naval navigation charts, cipher machines, and early computer systems tackling complex ones, societies have constantly looked to augment human limitations.

In many ways, it could be argued that nothing has changed. Humanity is still striving to push the boundaries of what can be achieved to support human endeavour. Indeed, data availability, compute power, algorithmic advances, economic incentives, and deployment scalability are pushing an AI evolution (Dhar, 2023) at breakneck speed. With these advances sparking the emergence of powerful generative and agentic AI capabilities at our fingertips, both opportunities and risks are magnified. There are many opportunities for AI to be used for good, with Floridi and Cowls (2019) espousing the utmost importance of using AI to benefit the wellbeing of people and the planet. For example, within the healthcare sector, Qian et al. (2024) contended that diagnosing diseases, spotting malignant tumours, and discovering drugs are tasks where AI can be, and is being, leveraged, and in some instances may even outperform healthcare professionals.

For all the potential good AI can bring, there is a flip side to this coin. Pulling on the thread that AI might outperform healthcare professionals, Bughin (2023) spoke of AI adoption driving broader societal upheaval, including an existential debate around employment. Karim Lakhani, of Harvard Business School, cut straight to the heart of this when he observed that 'AI won't replace humans, but humans with AI will replace humans without AI'. These examples are just the tip of the iceberg, but they offer an insight into the constant tension, the push and pull between AI risks and opportunities, that brings ideas centred around AI and autonomy to the fore.

In adopting AI, we acknowledge, and cede, some of our decision-making authority to these technology systems (Floridi and Cowls, 2019). In many ways, especially given the casual way most people use AI tools, it is a tacit handshake, if you will. That said, this unspoken deal involves willingly giving up a degree of human autonomy, which raises some interesting and contentious ethical, moral, and legal questions, particularly as the stakes become greater. It is in these debates that actionable phrases such as 'human in the loop', referring to the practice of keeping human oversight and accountability inside AI systems so that decisions remain aligned with human values and ethical standards, have become the norm.
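To make 'human in the loop' slightly more concrete, here is a minimal sketch of one common pattern, confidence-based escalation, written in Python. The names, the threshold, and the scoring logic are entirely hypothetical; the point is only the shape of the loop: the system acts autonomously when it is sufficiently confident, and otherwise hands the decision back to a person.

```python
from dataclasses import dataclass

# Policy knob: below this confidence, the model may not act alone.
# The value here is purely illustrative.
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def model_predict(case: dict) -> tuple[str, float]:
    """Stand-in for any classifier; returns a (label, confidence) pair."""
    score = case.get("risk_score", 0.5)  # hypothetical feature
    return ("flag", score) if score >= 0.5 else ("clear", 1.0 - score)

def human_review(case: dict, suggestion: str) -> str:
    """Stand-in for a human reviewer; a real system would queue this case."""
    print(f"Escalated to a human: {case} (model suggested '{suggestion}')")
    return suggestion  # the reviewer may accept or override the suggestion

def decide(case: dict) -> Decision:
    label, confidence = model_predict(case)
    if confidence < CONFIDENCE_THRESHOLD:
        # The human, not the model, owns low-confidence decisions.
        return Decision(human_review(case, label), confidence, "human")
    return Decision(label, confidence, "model")

if __name__ == "__main__":
    for case in [{"risk_score": 0.97}, {"risk_score": 0.62}]:
        print(decide(case))
```

The threshold is the interesting design choice: it is a governance lever rather than a technical one, and lowering it shifts authority back towards humans as the stakes of a decision rise.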

Taking ideas around ethical, moral, and legal implications a step further, the emergence of new and seemingly endless AI capabilities poses an almost 'Sword of Damocles' dilemma for those empowered to govern AI. The burden of responsibility is unyielding and, although the tension between the need to innovate and the need to safeguard against risk is not new (cf. Knoppers and Thorogood, 2017), for many, the stakes when considering AI are greater than ever. It is in this tension that the idea of responsible AI (RAI) becomes important (Morley et al., 2020). A growing body of research places theory grounded in ideas of 'responsibility' at the forefront of thinking on AI governance policy. Indeed, Papagiannidis et al. (2025), in reviewing the responsible AI governance literature, noted that international bodies the world over continue to drive towards the development of a set of responsible AI principles. Although still somewhat muddled (Floridi and Cowls, 2019), these principles persist as an ethical bedrock, even as organisations across a myriad of sectors, in pursuit of AI-driven advantages, struggle to translate them into meaningful operational practice (Rakova et al., 2021).

Bughin, J. (2023). Does artificial intelligence kill employment growth: the missing link of corporate AI posture. Frontiers in Artificial Intelligence, 6. https://doi.org/10.3389/frai.2023.1239466

Dhar, V. (2023). The Paradigm Shifts in Artificial Intelligence.

Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review. https://doi.org/10.1162/99608f92.8cd550d1

Knoppers, B. M., & Thorogood, A. M. (2017). Ethics and big data in health. Current Opinion in Systems Biology, 4, 53–57. https://doi.org/10.1016/j.coisb.2017.07.001

Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices. Science and Engineering Ethics, 26(4), 2141–2168. https://doi.org/10.1007/s11948-019-00165-5

Papagiannidis, E., Mikalef, P., & Conboy, K. (2025). Responsible Artificial Intelligence Governance: A Review and Research Framework. Journal of Strategic Information Systems, 34(2), 101885. https://doi.org/10.1016/j.jsis.2024.101885

Qian, Y., Siau, K. L., & Nah, F. F. (2024). Societal impacts of artificial intelligence: Ethical, legal, and governance issues. Societal Impacts, 3, 100040. https://doi.org/10.1016/j.socimp.2024.100040

Rakova, B., Yang, J., Cramer, H., & Chowdhury, R. (2021). Where Responsible AI meets Reality: Practitioner Perspectives on Enablers for Shifting Organizational Practices. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1). https://doi.org/10.1145/3449081
