Differentially Private Language Generation and Identification in the Limit
Anay Mehrotra, Grigoris Velegkas, Xifan Yu, Felix Zhou
TLDR
This paper studies differentially private language generation and identification in the limit, showing that privacy is qualitatively free for generation but imposes fundamental barriers on identification.
Key contributions
- For countable language collections, DP generation in the limit is possible, though at a quantitative cost: some finite collections of size $k$ require $\Omega(k/\varepsilon)$ samples privately, whereas one sample suffices non-privately.
- DP language identification faces fundamental barriers: no $\varepsilon$-DP algorithm can identify a collection containing two languages with an infinite intersection and a finite set difference.
- In the stochastic setting, private identification is possible if and only if the collection is identifiable in the (non-private) adversarial model.
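For reference (the digest above takes it for granted), the results use the standard definition of $\varepsilon$-differential privacy: a mechanism $M$ is $\varepsilon$-DP if, for all neighboring input sequences $S, S'$ (differing in a single string) and every set of outputs $T$,

```latex
\Pr[M(S) \in T] \;\le\; e^{\varepsilon} \cdot \Pr[M(S') \in T].
```

In the continual release model of the abstract, the protected object is the entire input sequence, so this bound must hold for the full output stream.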
Why it matters
This work establishes a new theoretical understanding of differential privacy's impact on language generation and identification in the limit. It reveals distinct challenges and possibilities for privacy-preserving algorithms in these fundamental learning tasks, and separates the adversarial and stochastic settings for identification.
Original Abstract
We initiate the study of language generation in the limit, a model recently introduced by Kleinberg and Mullainathan [KM24], under the constraint of differential privacy. We consider the continual release model, where a generator must eventually output a stream of valid strings while protecting the privacy of the entire input sequence. Our first main result is that for countable collections of languages, privacy comes at no qualitative cost: we provide an $\varepsilon$-differentially-private algorithm that generates in the limit from any countable collection. This stands in contrast to many learning settings where privacy renders learnability impossible. However, privacy does impose a quantitative cost: there are finite collections of size $k$ for which uniform private generation requires $\Omega(k/\varepsilon)$ samples, whereas just one sample suffices non-privately. We then turn to the harder problem of language identification in the limit. Here, we show that privacy creates fundamental barriers. We prove that no $\varepsilon$-DP algorithm can identify a collection containing two languages with an infinite intersection and a finite set difference, a condition far stronger than the classical non-private characterization of identification. Next, we turn to the stochastic setting where the sample strings are sampled i.i.d. from a distribution (instead of being generated by an adversary). Here, we show that private identification is possible if and only if the collection is identifiable in the adversarial model. Together, our results establish new dimensions along which generation and identification differ and, for identification, a separation between adversarial and stochastic settings induced by privacy constraints.
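The abstract does not spell out the paper's private generator, but the role of $\varepsilon$ can be seen in the most basic DP primitive. A minimal sketch (the standard Laplace mechanism, not the paper's algorithm): the noise scale grows as $1/\varepsilon$, the same trade-off behind the $\Omega(k/\varepsilon)$ sample lower bound, since stronger privacy (smaller $\varepsilon$) requires more data to overcome the added randomness.

```python
import math
import random

def laplace_mechanism(true_value, epsilon, sensitivity=1.0):
    """Release true_value plus Laplace(sensitivity/epsilon) noise (epsilon-DP)."""
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    u = random.uniform(-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise
```

Smaller $\varepsilon$ means a larger noise scale, so more samples are needed before the signal dominates the noise; this is the qualitative shape of the quantitative cost the paper proves for uniform private generation.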