Penalized Likelihood for Dyadic Network Formation Models with Degree Heterogeneity
Zizhong Yan, Jingrong Li, Yi Zhang
TLDR
This paper introduces a penalized likelihood method for dyadic network formation models with degree heterogeneity, resolving nonexistence of the fixed-effects MLE and correcting incidental-parameter bias.
Key contributions
- Proposes a penalized likelihood method for dyadic network formation models.
- Solves MLE existence issues and incidental-parameter bias in degree-heterogeneous networks.
- Guarantees finite-sample existence and provides bias corrections for coefficients.
- Establishes asymptotic results accommodating degree sparsity without fixed-effects compactness.
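To see why MLE existence fails without a penalty, consider an agent that sends no links: its sender fixed effect has no finite maximizer, since the log-likelihood keeps increasing as the effect tends to minus infinity. A minimal sketch (assuming a logit link; `loglik_sender` and the zero-degree row are illustrative, not from the paper):

```python
import numpy as np

def loglik_sender(alpha_i, y_row, eta_rest):
    """Log-likelihood contribution of agent i's outgoing links,
    assuming a logit link: eta_ij = alpha_i + (other terms)."""
    eta = alpha_i + eta_rest
    return np.sum(y_row * eta - np.log1p(np.exp(eta)))

# Hypothetical agent with zero out-degree: y_ij = 0 for all j.
y_row = np.zeros(10)
eta_rest = np.zeros(10)

# The likelihood strictly increases as alpha_i -> -inf, so the
# unpenalized fixed-effects MLE does not exist for this agent.
lls = [loglik_sender(a, y_row, eta_rest) for a in (-1.0, -5.0, -20.0)]
assert lls[0] < lls[1] < lls[2]
```

Trimming such agents restores existence but changes the estimation sample, which is the selection-bias problem the penalty is designed to avoid.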
Why it matters
This paper provides a robust solution to long-standing estimation problems in network formation models with degree heterogeneity. It guarantees that the estimator exists in finite samples and avoids the selection bias induced by trimming zero-degree or fully connected agents, where traditional fixed-effects methods fail, improving the reliability of insights drawn from complex network data.
Original Abstract
Estimating network formation models with degree heterogeneity raises two problems in empirical networks. First, agents that send no links, receive no links, or link to all remaining agents can make the fixed-effects MLE fail to exist. Trimming these agents changes the estimation sample and induces selection bias. Second, the incidental-parameter problem biases common parameters and average partial effects. We resolve both issues through a penalized likelihood approach. Our leading specification is a directed network model with reciprocity, nesting the standard undirected and non-reciprocal directed models. The penalty guarantees finite-sample existence and yields bias corrections for coefficients and partial effects. We establish asymptotic results without imposing compactness on the fixed effects. Allowing the fixed effects to diverge at a logarithmic rate, our asymptotic framework accommodates the degree sparsity ubiquitous in large empirical networks. A global trade application demonstrates that our estimator avoids selection bias and recovers robust parameters where conventional methods fail.
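The penalized approach can be sketched on simulated data. The toy below uses a directed dyadic logit with sender effects α_i, receiver effects γ_j, one dyadic covariate, and a simple ridge penalty on the fixed effects as a stand-in for the paper's penalty; the simulation design, step sizes, and λ are all illustrative assumptions, and the paper's actual penalty, reciprocity term, and bias corrections are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30                                    # number of agents
alpha_true = rng.normal(0, 0.5, n)        # sender ("out") effects
gamma_true = rng.normal(0, 0.5, n)        # receiver ("in") effects
beta_true = 1.0                           # common coefficient
X = rng.normal(size=(n, n))               # dyadic covariate x_ij
idx = ~np.eye(n, dtype=bool)              # exclude self-links

eta = beta_true * X + alpha_true[:, None] + gamma_true[None, :]
Y = (rng.random((n, n)) < 1 / (1 + np.exp(-eta))).astype(float)

def penalized_fit(Y, X, idx, lam=0.05, lr=0.5, iters=2000):
    """Gradient ascent on the penalized log-likelihood
    sum y*eta - log(1+e^eta) - lam/2 * (||alpha||^2 + ||gamma||^2).
    The ridge term keeps the maximizer finite even for agents
    with degree 0 or n-1, so no trimming is needed."""
    n = Y.shape[0]
    beta, alpha, gamma = 0.0, np.zeros(n), np.zeros(n)
    for _ in range(iters):
        eta = beta * X + alpha[:, None] + gamma[None, :]
        p = 1 / (1 + np.exp(-eta))
        r = (Y - p) * idx                 # score residuals on valid dyads
        beta += lr * (r * X).sum() / idx.sum()
        alpha += lr * (r.sum(axis=1) / (n - 1) - lam * alpha)
        gamma += lr * (r.sum(axis=0) / (n - 1) - lam * gamma)
    return beta, alpha, gamma

beta_hat, alpha_hat, gamma_hat = penalized_fit(Y, X, idx)
```

All estimates stay finite by construction; in the paper, the penalty is additionally chosen so that it delivers bias corrections for the coefficients and average partial effects, which this sketch does not attempt.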