Policy makers should inform, consult and involve citizens as part of their efforts to govern data-driven technologies such as artificial intelligence (AI). Although many users rely on AI systems, they do not understand how these systems use their data to make predictions and recommendations that can affect their daily lives. Over time, if users see their data being misused, they may come to distrust both the systems and the policy makers who regulate them. This paper examines whether officials informed and consulted their citizens as they developed a key aspect of AI policy: national AI strategies. Drawing on a data set of 68 countries and the European Union, the authors used qualitative methods to examine whether, how and when governments engaged with their citizens on their AI strategies, and whether they were responsive to public comment. They conclude that policy makers are missing an opportunity to build trust in AI by failing to use this process to involve a broader cross-section of their constituents.