ArXiv TLDR

Bureaucratic Silences: What the Canadian AI Register Reveals, Omits, and Obscures

2604.15514

Dipto Das, Christelle Tessono, Syed Ishtiaque Ahmed, Shion Guha

cs.AI cs.CY cs.HC

TLDR

Canada's AI Register, meant for transparency, actually obscures human discretion and frames AI as "reliable tooling," hindering true accountability.

Key contributions

  • Argues AI registers are instruments of ontological design, not neutral mirrors of government activity.
  • Analyzed 409 systems in Canada's Federal AI Register using the ADMAPS framework.
  • Revealed 86% of systems are internal, yet human discretion and training are systematically obscured.
  • Shows the Register frames AI as "reliable tooling" rather than "contestable decision-making."

Why it matters

This paper critically examines government AI transparency, showing how registers can obscure the human discretion behind AI systems and frame them as mere tools. It matters for understanding how current transparency practices risk reducing accountability to performative compliance, offering visibility without contestability.

Original Abstract

In November 2025, the Government of Canada operationalized its commitment to transparency by releasing its first Federal AI Register. In this paper, we argue that such registers are not neutral mirrors of government activity, but active instruments of ontological design that configure the boundaries of accountability. We analyzed the Register's complete dataset of 409 systems using the Algorithmic Decision-Making Adapted for the Public Sector (ADMAPS) framework, combining quantitative mapping with deductive qualitative coding. Our findings reveal a sharp divergence between the rhetoric of "sovereign AI" and the reality of bureaucratic practice: while 86% of systems are deployed internally for efficiency, the Register systematically obscures the human discretion, training, and uncertainty management required to operate them. By privileging technical descriptions over sociotechnical context, the Register constructs an ontology of AI as "reliable tooling" rather than "contestable decision-making." We conclude that without a shift in design, such transparency artifacts risk automating accountability into a performative compliance exercise, offering visibility without contestability.
