Ask most scientists what the biggest research breakthrough of 2020 was and it’s far from guaranteed that the development of a vaccine for coronavirus would top their list. They are just as likely to point to DeepMind’s use of AI to solve the protein folding problem (in short, the ability to predict the 3D shape of a protein from its amino acid sequence alone).
A vastly improved understanding of the shape of the building blocks of life has the potential to dramatically accelerate the process of drug development. It may have come too late to have an impact on this pandemic, but it will make it a lot easier to treat the next one.
That’s far from the only application either. It could also be used to tackle climate change by sucking carbon out of the sky or creating enzymes that break down plastic waste in the sea.
It’s a stark example of the power and potential of AI. A problem that has plagued scientists for more than half a century solved by a machine learning algorithm. Think about what other problems AI can solve and what that could mean for our health, the economy, and the planet.
London-based DeepMind’s success is the latest in a long line of British computing breakthroughs, from Babbage and Lovelace through Turing to Berners-Lee and now to Hassabis.
But we shouldn’t rest on our laurels. The rest of the world isn’t. By the latest count, 54 countries across the globe are either in the process of developing or have already published a national AI strategy.
And there’s evidence that, despite some notable successes, the UK is falling behind. We’ve gone from 5th to 10th on a Stanford University ranking, which looks at patenting, investment and skills.
To its credit, this government has the right ambitions on AI. Last week it announced a new national AI strategy to be published in the autumn. Its aim will be to “make the UK a global centre for the development, commercialisation and adoption of responsible AI.”
But so far there has been little in the way of the concrete policy changes necessary to deliver it. A new report released today by Seb Krier, the former head of regulation at the Office for AI, aims to change that. It sets out bold reforms that would make the government’s ambition a reality.
Talent will be key. In practice, this will mean looking at immigration policy. Investing in the domestic STEM and AI skills pipeline is smart, but it’ll take years to pay off. By contrast, immigration has an immediate impact.
But it’s not enough to open the door. We should be doing everything we can to draw talent in. The historian Anton Howes points out that this was how we used to do it. In the 1800s we were so concerned about staying at the forefront of engineering that when Marc Isambard Brunel was jailed for unpaid debts, the government of the day paid them off and let him go, provided he stayed in Britain. If they hadn’t, his son Isambard Kingdom would probably have gone on to be Russia’s greatest engineer instead.
Now, I’m not suggesting we start turning a blind eye to the indiscretions of foreign AI researchers, but we can do more to draw in the best and brightest from elsewhere. Funding more postgraduate scholarships for overseas students in AI-related fields would be a good start.
Cutting-edge AI projects demand enormous computing power. This creates a barrier to entry and a barrier to non-commercial research, and many top researchers are drawn out of academia as a result. This is a problem not just because there are many worthwhile non-commercial research projects out there, but also because it makes it less likely that highly specialised knowledge gets passed on to students. One study found that when AI experts leave academia, the number of student-founded start-ups declines too.
One way to stem the brain drain away from academia is to create a national research cloud. Through cloud computing credits and publicly available, AI-ready data sets, researchers and students would gain access to the computational power necessary to develop AI applications and to scrutinise commercial models.
Data can be another barrier to entry. DeepMind, for example, trained its algorithms on around 170,000 known proteins. Because access to data at that scale is essential, tech giants such as Google and Facebook have a built-in advantage in AI. Reforming copyright laws by creating an exemption for text-and-data mining would be one way of levelling the playing field for start-ups.
AI has massive potential to make our lives better, but many of its applications will rely on public trust and support. That’s a problem, because more than half of the British public say they trust no organisation to use algorithms when making judgements about them. Scandals such as the A-Level grading fiasco show the public’s scepticism isn’t entirely irrational.
To build trust and prevent future scandals, the government should make the Centre for Data Ethics and Innovation fully independent. This would allow it to scrutinise the public sector’s use of algorithms to ensure it is both technically and ethically sound.
AI is set to transform the global economy over the next decade. It is a revolution that the UK is well-placed to profit from but, unless the government can match rhetoric with concrete policies, we risk falling behind.