Artificial intelligence (AI) is expanding at a rapid rate and its real-world application is only growing. Currently, over 50,000 people in Britain work in the field and the industry contributed £3.7bn to the economy last year.
As the tech continues its meteoric rise, questions have been raised about the potential future risks it could pose to people’s privacy, human rights and safety. To regulate AI in a way that drives responsible innovation and builds public trust in the technology, the UK Government has launched a white paper to guide the use of AI in the UK.
The announcement comes as a group of more than 1,000 AI experts, including Apple co-founder Steve Wozniak and Twitter CEO Elon Musk, has called for a global six-month pause in AI development.
Five principles
Five principles - safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress - have been adopted and are the pillars on which the new white paper stands.
Of particular relevance to performance marketers are two principles: transparency and explainability, and fairness. Transparency and explainability outlines the requirement for organisations developing and deploying AI to be able to communicate when and how it is used and clearly explain a system’s decision-making process and any risks posed by the tech. Fairness dictates that AI should be used in compliance with existing UK laws, like GDPR, and must not discriminate against individuals or create unfair commercial outcomes.
Separation of regulatory power
The government insists its approach to regulating AI will make it easier for businesses to innovate and create jobs, without putting the British public in jeopardy.
In what is framed as an alternative to heavy-handed legislation that could stifle growth, the government is taking an adaptable approach to regulation and opting not to give total control to one single entity.
Instead, regulating duties will be split among existing regulators - such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority - to come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors.
Michelle Donelan, Science, Innovation and Technology Secretary, said: “AI has the potential to make Britain a smarter, healthier and happier place to live and work. Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is such that we need to have rules to make sure it is developed safely.”
Industry input
To support innovators bringing new ideas to market without being blocked by rulebook barriers, the government has committed £2m to fund a new trial environment where businesses can test how regulation could be applied to their AI products and services.
People working within AI can also share their views on the white paper as part of a new consultation launched earlier this week to inform development of the framework in the coming months.
Grazia Vittadini, Chief Technology Officer at Rolls-Royce, said: “Both our business and our customers will benefit from agile, context-driven AI regulation. It will enable us to continue to lead the technical and quality assurance innovations for safety-critical industrial AI applications, while remaining compliant with the standards of integrity, responsibility and trust that society demands from AI developers.”
Clare Barclay, CEO of Microsoft UK, added: “AI is the technology that will define the coming decades with the potential to supercharge economies, create new industries and amplify human ingenuity.
“If the UK is to succeed and lead in the age of intelligence, then it is critical to create an environment that fosters innovation, whilst ensuring an ethical and responsible approach. We welcome the UK’s commitment to being at the forefront of progress.”
AI experts call for ‘pause’ in AI training
The white paper’s launch follows a group of AI experts calling for a six-month halt in the training of powerful AI systems due to the potential risks to society and humanity.
Issued by the non-profit Future of Life Institute, the open letter was signed by more than 1,000 people, including Apple co-founder Steve Wozniak, Twitter CEO Elon Musk, Yoshua Bengio, often referred to as one of the "godfathers of AI", and Stuart Russell, a pioneer of research in the field, as well as researchers at Alphabet-owned DeepMind.
The missive warned of potential risks to society and civilisation posed by human-competitive AI systems in the form of economic and political disruption.
"AI systems with human-competitive intelligence can pose profound risks to society and humanity," the letter states.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
If such a pause cannot be enacted quickly, the letter says governments should step in and institute a moratorium.