In an effort to replicate the human brain, artificial intelligence (AI) has undergone a number of developmental phases, including machine learning, since the 1950s. This raises the following questions: To what degree should AI be granted autonomy in order to take advantage of its power and precision, and to what degree should it remain subordinate to human supervision?
It could be argued that removing human control would temper the most distasteful elements of warfare and enable conflict to be conducted more ethically. Conversely, the dehumanization of conflict could lower the threshold for war and allow armed conflicts to drag on without end.
Given the uncertainty surrounding AI's impacts, Robert Mazzolin argues that there is an urgent need to clarify the ethical issues involved before technological developments outpace society's ability to address them. International governance bodies are best suited to develop regulatory frameworks, and they must pursue a governance strategy that is both precautionary and anticipatory.