A Necessary Pause: Responsibility, AI, and the Year Ahead
I recently received feedback on an early draft of my PhD introduction suggesting that my writing needs to move towards a more academic register. I think this is a fair critique, and one I’m committed to working on as I begin developing my first paper.
That said, rereading what I wrote, I still feel it captures something important about the moment we find ourselves in. This post reflects that perspective, sitting deliberately at the intersection of industry practice and academic inquiry, as I look to push forward in both.
I’d be keen to hear what others think and to engage in thoughtful discussion around Responsible AI.
If you close your eyes and listen closely enough, you will hear the sound of Artificial Intelligence (AI) slowly being interwoven into our way of living. AI is no longer something imagined in some mythical science fiction fantasy; it is here and, for many, available at the mere touch of a button. That said, however much AI is transforming our day-to-day lives, the motives driving its explosion are not new. Humans have a long history of seeking automation and decision support to reduce labour and cognitive burden. From mechanical clocks, watermills and windmills, and industrial assembly lines supporting practical tasks, to naval navigation charts, cipher machines, and early computer systems tackling more complex ones, societies have constantly looked to augment human limitations.
In many ways, it could be argued that nothing has changed. Humanity is still striving to push the boundaries of what can be achieved to support human endeavour. Indeed, data availability, compute power, algorithmic advances, economic incentives, and deployment scalability are driving an AI evolution (Dhar, 2023) at breakneck speed. With these advances placing powerful generative and agentic AI capabilities at our fingertips, both opportunities and risks are magnified. There are many far-reaching opportunities for AI to be used for good, with Floridi and Cowls (2019) espousing the utmost importance of using AI to benefit the wellbeing of people and the planet. For example, within the healthcare sector, Qian et al. (2024) contended that diagnosing diseases, spotting malignant tumours, and drug discovery are tasks where AI can be, and is being, leveraged and, in some instances, may outperform healthcare professionals. For all the potential good AI can bring, there is a flip side to this coin. Pulling on the thread that AI might outperform healthcare professionals, Bughin (2023) spoke of AI adoption driving broader societal upheaval, including an existential debate around employment. Sundar Pichai, the CEO of Google, cut straight to the heart of this when he stated that ‘AI won’t replace humans, but humans with AI will replace humans without AI’. These examples are just the tip of the iceberg, but they offer an insight into a constant tension, the push and pull between AI risks and opportunities, that pushes ideas centred on AI and autonomy to the fore.
In adopting AI, we openly acknowledge and cede some of our decision-making authority to these technology systems (Floridi and Cowls, 2019). In many ways, especially in the casual way most humans use AI tools, it is a tacit handshake, if you will. That said, this unspoken deal involves willingly giving up a degree of human autonomy, which raises interesting and contentious ethical, moral, and legal debates, particularly as the ‘stakes’ become greater. It is in these debates that actionable phrases such as ‘human in the loop’, referring to the practice of integrating human accountability into AI systems to ensure decisions align with human values and ethical standards, have become the norm.
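To make the ‘human in the loop’ idea a little more concrete, here is a minimal sketch, assuming a hypothetical decision-support setting (the names, thresholds, and approval mechanism below are all illustrative, not any particular system’s API), of how an AI recommendation might be gated behind human sign-off as the stakes rise:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # what the AI system proposes
    confidence: float  # the model's self-reported confidence, 0.0 to 1.0
    high_stakes: bool  # set by the deployment context, not by the model

def human_approves(rec: Recommendation) -> bool:
    """Stand-in for a real review step (a ticket, dashboard, or UI prompt)."""
    answer = input(f"Approve proposed action '{rec.action}'? [y/n] ")
    return answer.strip().lower() == "y"

def decide(rec: Recommendation) -> str:
    # Low-stakes, high-confidence outputs proceed automatically; anything
    # high-stakes or uncertain is escalated to a person, keeping
    # accountability with a human rather than with the system itself.
    if rec.high_stakes or rec.confidence < 0.9:
        return rec.action if human_approves(rec) else "rejected by reviewer"
    return rec.action

# A high-stakes clinical suggestion, for instance, always requires sign-off.
print(decide(Recommendation("flag scan for oncologist review", 0.97, True)))
```

The design point is not the particular threshold, but that the escalation rule lives outside the model: the system cannot talk its way past the gate, and responsibility for the final decision remains with a human.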
Taking ideas around ethical, moral, and legal implications a step further, the emergence of new and seemingly endless potential AI capabilities poses an almost ‘Sword of Damocles’ dilemma for those empowered to govern AI. The burden of responsibility is unyielding and, although the juxtaposition between the need to innovate and the need to safeguard and/or control risk is not new (cf. Knoppers and Thorogood, 2017), for many, the stakes when considering AI are greater than ever. It is in this tension that the idea of responsible AI (RAI) becomes important (Morley et al., 2020). There is a growing body of research in which theory grounded in ideas of ‘responsibility’ is placed at the forefront when considering governance policies associated with AI. Indeed, Papagiannidis et al. (2025), in reviewing the responsible AI governance literature, noted that international bodies the world over are continuing to drive towards the development of a set of responsible AI principles. Although still somewhat muddled (Floridi and Cowls, 2019), these principles persist as an ethical bedrock, even as organisations across a myriad of sectors struggle to translate them into meaningful operational practice in the pursuit of AI-driven advantages (Rakova et al., 2021).
Bughin, J. (2023). Does artificial intelligence kill employment growth: the missing link of corporate AI posture. Frontiers in Artificial Intelligence, 6. https://doi.org/10.3389/frai.2023.1239466
Dhar, V. (2023). The Paradigm Shifts in Artificial Intelligence.
Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review. https://doi.org/10.1162/99608f92.8cd550d1
Knoppers, B. M., & Thorogood, A. M. (2017). Ethics and big data in health. Current Opinion in Systems Biology, 4, 53–57. https://doi.org/10.1016/j.coisb.2017.07.001
Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices. Science and Engineering Ethics, 26(4), 2141–2168. https://doi.org/10.1007/s11948-019-00165-5
Papagiannidis, E., Mikalef, P., & Conboy, K. (2025). Responsible Artificial Intelligence Governance: A Review and Research Framework. Journal of Strategic Information Systems, 34(2). https://doi.org/10.1016/j.jsis.2024.101885
Qian, Y., Siau, K. L., & Nah, F. F. (2024). Societal impacts of artificial intelligence: Ethical, legal, and governance issues. Societal Impacts, 3, 100040. https://doi.org/10.1016/j.socimp.2024.100040
Rakova, B., Yang, J., Cramer, H., & Chowdhury, R. (2021). Where Responsible AI meets Reality: Practitioner Perspectives on Enablers for Shifting Organizational Practices. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1). https://doi.org/10.1145/3449081
Reflections on the AI4People Summit 2025 – Advancing Ethical AI Governance
The AI4People Summit 2025 arrived at a pivotal moment in the global conversation around AI governance. Coinciding with the release of the AI4People Playbook, the event brought together leaders from academia, industry, policy and civil society. What stood out across the opening panel, the Playbook launch, a working session on building a trustworthy AI-enabled future and discussions on global governance was not simply expertise, but a sense of shared responsibility and urgency.
Despite the range of backgrounds represented, the room felt aligned on one point: ethical AI governance is ultimately a question about the kind of society we want to build.
Governing AI as Governing Ourselves
Professor Virginia Dignum’s framing captured this perfectly. Two remarks deeply resonated with me:
‘Every AI system is a reflection of society.’
‘When we govern AI, we are governing ourselves.’
These ideas cut through both the hype and the fatalism surrounding AI. They reject the common narrative that imagines AI as an autonomous threat, a kind of ‘Skynet’ waiting to slip its leash. Instead, her comments located risk in a much more grounded space. They spoke to the incentives we create, the safeguards we choose and the values we embed into our institutions.
If AI reflects society, then the governance debate is not about controlling an alien intelligence. It is about holding a mirror up to the systems we have already built and asking whether they produce the outcomes we want.
Beyond the AI Act: The Case for Pragmatism
Dame Wendy Hall added an important complement to this perspective. While the EU AI Act continues to shape global regulatory thinking, she argued that we now need more pragmatic approaches that are adaptable across regions, sectors and values.
This is the unglamorous reality of governance: principles only matter if they can be implemented. Different countries have different regulatory cultures, different institutions and different tolerances for risk. Effective AI governance must therefore be both principled and practical. The summit made clear that flexibility does not mean dilution. Rather, it means recognising that governance is a living ecosystem, not a static rulebook.
Trust as a Socio-Technical Construct
One of the most rewarding sessions explored what it means to build a trustworthy AI-enabled future. A socio-technical perspective dominated the conversation, emphasising that trust is never just a property of technology.
Trust emerges from the interplay of:
The values a system is built upon
The processes through which it operates
The outcomes it produces in the real world.
A helpful distinction emerged between reliability (the system works as intended) and trustworthiness (the system aligns with societal expectations and norms). Trustworthiness was described as a ‘meta-value’ that rests on recognised societal values rather than technical specifications.
A framework surfaced repeatedly during discussions, a triangle of:
Framing - Defining what trust means in context
Measurement - Identifying indicators that genuinely reflect trustworthiness
Interventions - Designing mechanisms that reinforce desired behaviours.
When these three elements align, trust can become operational rather than aspirational. When they do not, governance risks becoming symbolic rather than effective.
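As a thought experiment on what ‘operational’ could look like, here is a minimal sketch, assuming entirely hypothetical indicator names, scores, and interventions (none of this comes from the summit materials), that pairs each corner of the triangle with a concrete artefact:

```python
from dataclasses import dataclass, field

@dataclass
class TrustTriangle:
    # Framing: a plain statement of what trust means in this context.
    framing: str
    # Measurement: named indicators with current scores between 0.0 and 1.0.
    indicators: dict[str, float] = field(default_factory=dict)
    # Interventions: the action to take when an indicator misses its target.
    interventions: dict[str, str] = field(default_factory=dict)

    def review(self, target: float = 0.8) -> list[str]:
        """List the interventions triggered by under-performing indicators."""
        return [
            self.interventions.get(name, f"no intervention defined for '{name}'")
            for name, score in self.indicators.items()
            if score < target
        ]

# Hypothetical example: a loan-approval assistant.
triangle = TrustTriangle(
    framing="Applicants can understand and contest every automated decision.",
    indicators={"explanation_coverage": 0.92, "appeal_resolution_rate": 0.64},
    interventions={"appeal_resolution_rate": "add human reviewers to appeals"},
)
print(triangle.review())  # -> ['add human reviewers to appeals']
```

Even in this toy form, the alignment the summit described is visible: the framing justifies the indicators, the indicators make the framing measurable, and the interventions close the loop when measurement falls short.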
Closing Reflections
The Summit reinforced a truth that is easy to overlook amid rapid technological change: ethical AI governance is not about taming technology. It is about steering human institutions, aligning incentives with values and ensuring that innovation does not outpace our capacity for responsibility.
The energy and commitment at the event were inspiring. More importantly, they were grounded in realism. No one claimed that the path forward will be easy. What was claimed, however, and what I left believing, is that shaping a trustworthy AI future is both possible and necessary, provided we are willing to confront our own assumptions and build governance frameworks that reflect the societies we hope to become.