Artificial intelligence is a general-purpose technology. Many people have relied on historical analogies, like electricity, to explain what that means. Essentially, it's a tool that we can layer many different functions on top of. Electricity, for example, enabled us to see indoors, to stay up later, and to become more efficient in manufacturing; the analogy is used to describe what artificial intelligence might provide us going forward.
And whether or not it's the next electricity (some suggest it's more akin to a natural disaster, an earthquake or a fire, which we can't control), many think that the ability to turn AI on and off will be an important governance capability going forward.
So, what do policy makers need to do, and how can they prepare to make sure that we allow our engineers and experts to innovate while also protecting our citizens, their privacy and their safety? There are four steps I'm recommending that policy makers consider going forward. First, develop a trusted network of experts who can help identify new developments in AI technologies. Second, create strategies flexible enough to accommodate those shifts when, for example, computing power or algorithmic quality improves. Third, identify the specific AI applications most in need of governance frameworks. And fourth, make sure international governance frameworks are consistent with the national regulations you're putting together.
And so, if we start now, developing standards for how we want these technologies to evolve, we'll be ready, and sure that we've done all we can, to see that our children and grandchildren are using technologies that we want them to be using and that are good for our countries and the world.