AI May Be a Powerful Tool, but It’s No Substitute for Cyber Experts

Balancing AI-driven automation and human oversight is crucial.

September 19, 2024
AI systems will inevitably become targets for malicious actors, the author argues. (Photo illustration/REUTERS)

This is the year for artificial intelligence (AI) integration — it’s now embedded in nearly everything we do. AI systems increasingly control our environment through Internet of Things (IoT) devices and beyond. IoT devices connect our physical environments to the digital world, and more than 15 billion such devices are now connected worldwide. That number is expected to double by 2030.

We are also interacting ever more naturally with AI through chatbots.

As with any emerging technology, adoption started small: summarizing emails and writing limited responses, arguing with customer service chatbots over service changes and refunds, and asking bots for travel recommendations. Soon, however, AI will extend to every aspect of our work and homes.

And there’s much more in store. As the technology advances, its ability to predict and protect against new cyberthreats will be vital for safeguarding and maintaining trust in our interconnected world. AI also promises to improve communication and training within the cyber industry, simplifying complex technical concepts and making them more accessible to a wider audience.

That’s the good news. Now for the bad news.

AI systems will inevitably become targets for malicious actors. This is especially relevant to the next generation of systems, which have been shown to act in unexpected ways, such as exposing private and sensitive information.

AI-driven security solutions also risk producing false positives that flood information security personnel with unnecessary alerts, or false negatives that overlook genuine threats.

As AI evolves, it could displace human technical expertise in cybersecurity. Yet balancing AI-driven automation with human oversight remains crucial, and is an essential part of any robust cyber operation.

The next significant boost in the AI revolution will happen when these systems, which are relatively isolated today, group together into a larger intelligence: a vast network of power generation and consumption with each building just a node, like an ant colony or a human army.

Future industrial-control systems will include traditional factory robots and AI systems to schedule their operation. They will automatically order supplies and coordinate final product shipping. When needed, they will call on humans to repair individual subsystems or handle tasks too specialized for robots.

But our newest robots will be very different from previous models. Their sensors and actuators will be distributed in the environment, and their processing will be dispersed. They'll be networks that become robots only in the aggregate.

This will turn our notion of security on its head. If massive decentralized AIs run everything, then who controls those AIs matters a lot.

It's as if all the executive assistants or lawyers in an industry worked for the same agency. An AI that is both trusted and trustworthy will become a critical requirement.

This future requires us to see ourselves less as individuals and more as parts of larger systems. It’s AI as nature, as Gaia — everything is one system. It’s a future more aligned with the Buddhist philosophy of interconnectedness than Western ideas of individuality. (It also aligns with science-fiction dystopias, like Skynet from the Terminator movies.)

It will require rethinking many of our assumptions about governance and the economy. That won’t happen soon, but in 2024, we will likely see the first steps along that path.

That’s why the European Union’s passing of the Artificial Intelligence Act in March of this year couldn’t have come at a better time. This legislation bans high-risk AI applications, such as certain biometric and facial recognition systems, social scoring mechanisms, and AI designed for manipulation or exploitation. It imposes strict rules on high-risk AI systems in critical domains, such as infrastructure, education and employment, requiring risk assessment, transparency and human oversight.

Despite criticism from the industry for potentially hampering innovation and competitiveness, and from advocacy groups for not fully addressing ethical concerns, the phased implementation aims to balance regulation with practicality.

As innovators continue to propel the evolution of AI, many countries will remain significant contributors. But this evolution must be informed by policymakers and lawmakers who truly understand not only the potential benefits but also the very considerable risks.

This piece first appeared in the Toronto Star.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Jeff Schwartzentruber is a senior machine learning scientist at eSentire, and a senior advisor to Rogers Cybersecure Catalyst at Toronto Metropolitan University, where he focuses on the intersection of AI and cybersecurity.