Canada is a prominent destination for international students, whose scores on English-language proficiency tests do not always translate into command of strict academic writing norms; supporting these students requires additional effort and personalized attention that universities often lack the resources to provide. Generative artificial intelligence built on large language models (LLMs) offers a unique opportunity to bridge this linguistic gap, helping non-native English speakers achieve academic writing fluency and linguistic proficiency at scale while minimizing universities' costs. Despite these potential benefits, the fair use of LLMs to aid students raises ethical concerns, particularly regarding plagiarism, the risk of overdependence, and the reliability of AI detection tools. This paper examines how LLMs assist international students, discusses the ethical considerations of their use in light of recent studies and scholars' insights, and proposes a university policy framework for responsible generative AI use in academic settings.