Responsible Innovation#

In both academia and industry, there is a growing gap between our ability to innovate and our capacity to foresee and manage the consequences of innovation. The current scale and pace of development, combined with growing organisational needs, make managing responsibility much more challenging. Technologies with uncertain impacts, or with no historical precedent, therefore require thorough analysis if their risks are to be managed responsibly.

RI Framework#

Organisations can use Owen et al.’s RI framework to integrate four essential characteristics into their innovation pipeline [Owen et al., 2013]:

  • Reflective: The first step is to define the desired outcomes of innovation. This requires reflection on the values that should guide innovation and consideration of the potential benefits and burdens it might create. However, there may be tensions and conflicts among these targets, and the prioritisation or exclusion of certain goals requires political and ethical discussion.

  • Anticipatory: Once the “right impacts” have been identified, it’s crucial to anticipate both intended and unintended consequences of innovation. This involves using methodologies like foresight, technology assessment, and scenario development to explore various pathways and potential outcomes. By asking “what if” questions, scientists and innovators can identify potential risks and opportunities early on.

  • Deliberative: Open and inclusive dialogue is vital to the deliberative dimension of innovation. This involves inviting perspectives from publics and diverse stakeholders to reframe issues, identify potential areas of contestation, and ensure a democratic and equitable approach. Responsible innovation is also a collective endeavour that requires collaboration and shared responsibility across the entire innovation ecosystem: scientists, innovators, businesses, research funders, policymakers, and users all have a role to play in ensuring the ethical and socially responsible development of new technologies.

  • Responsive: The overall management and governance process should enable responsiveness to new information, changing perspectives, and evolving social values. This requires the integration of effective mechanisms for participatory and anticipatory governance that allow for the modulation of innovation trajectories as new knowledge emerges.

Throughout this cookbook, we place heavy emphasis on developing and maintaining proactive approaches, and these four characteristics of responsible innovation are core to building proactivity into a system’s design. A proactive approach enables responsibility “by design” and heads off many concerns at the start of the development process. These dimensions encourage a shift from simply managing the products of innovation to proactively engaging with its purposes and underlying motivations, with a particular emphasis on inclusive and democratic governance.
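One lightweight way to make the four dimensions actionable “by design” is to treat them as a release checklist that each feature must satisfy before shipping. The sketch below is purely illustrative (the RI framework prescribes no code, and all names here, such as `RIReview`, are hypothetical): it records evidence against each dimension and flags which ones are still open.

```python
from dataclasses import dataclass, field

# The four characteristics from Owen et al.'s RI framework.
RI_DIMENSIONS = ("reflective", "anticipatory", "deliberative", "responsive")


@dataclass
class RIReview:
    """Hypothetical record of RI evidence gathered for one feature."""
    feature: str
    evidence: dict = field(default_factory=dict)  # dimension -> notes

    def record(self, dimension: str, notes: str) -> None:
        if dimension not in RI_DIMENSIONS:
            raise ValueError(f"Unknown RI dimension: {dimension}")
        self.evidence[dimension] = notes

    def missing(self) -> list:
        """Dimensions with no recorded evidence yet, in framework order."""
        return [d for d in RI_DIMENSIONS if d not in self.evidence]

    def ready_for_release(self) -> bool:
        return not self.missing()


review = RIReview(feature="credit-scoring model v2")
review.record("reflective", "Agreed desired outcomes with the ethics board")
review.record("anticipatory", "Ran a what-if scenario workshop on misuse")
print(review.missing())            # → ['deliberative', 'responsive']
print(review.ready_for_release())  # → False
```

In practice such a gate would live in a review or CI process rather than application code; the point is simply that each dimension becomes an explicit, auditable step rather than an afterthought.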

Public Attitudes to AI#

The recent trend in investment in AI-powered services shows that investors are increasingly building their strategies around the AI capabilities of businesses. In this competitive environment, companies are prioritising the rapid development of new AI functions with a primary focus on accuracy, often at the expense of inclusivity. Consequently, the public is likely to encounter more AI-powered tools that lack inclusive and fair practices. It is therefore crucial to understand both perspectives: (1) public attitudes to AI and (2) the efforts businesses are making to develop AI capabilities. This understanding can guide companies to create services that are more equitable, inclusive, and fair.

Public Attitudes to AI is also the name of the 2024 report by the CDEI (now the Responsible Technology Adoption Unit, RTAU) [Centre for Data Ethics and Innovation and Department for Science, Innovation & Technology, 2024]. Understanding public opinion on AI is crucial when developing AI-enabled products responsibly and building “trust” around them. It can help us answer:

  • How can all of society benefit equally from the use of data and AI?

  • How can we build trust between organisations and citizens, and ensure that organisations are held responsible for their actions?

  • How can the public’s perspective change when the overall view of AI is pessimistic?

  • How should we define risk-management strategies that help the public support the use of AI?

  • How can we design AI-enabled systems for citizens with lower digital literacy?

These questions sit at the intersection of anthropology, psychology, sociology, and computer science. Human-computer interaction researchers, and more specifically human-AI interaction researchers, have been tackling these challenges for years; our current interaction with computing devices such as computers and mobile phones is built on this exhaustive research. Chapter Evaluating and Mitigating Fairness in Financial Services: A Human-AI Interaction Perspective shares a use case where we used human-AI interaction guidelines and standards to evaluate some existing financial service applications. The next chapter summarises AI evaluation techniques for comprehensive auditing.

Participation Tools#

Increasing participation is one of the fundamental tasks in achieving responsible AI development, and it requires a multifaceted approach: identifying barriers to entry, promoting inclusivity, and providing avenues for engagement. Education and training programmes for people from diverse backgrounds, community engagement initiatives that share knowledge and foster collaboration (e.g. The ODI), and democratised research initiatives and funding projects (e.g. the Collective Intelligence Project) are all crucial.

In this document, we do not give step-by-step guidelines for participation; instead, we share two real-life initiatives that have rapidly accelerated “democratic” AI:

Case Study: Cohere Aya

Case Study: pol.is

Useful Reading#

  1. UKRI: https://www.ukri.org/who-we-are/epsrc/our-policies-and-standards/framework-for-responsible-innovation/

  2. TAS Hub - RRI: http://doi.org/10.17639/nott.7353