The Architectural Bottleneck Principle
Tiago Pimentel, University of Cambridge
Josef Valvoda, University of Cambridge
Niklas Stoehr, ETH Zurich
Ryan Cotterell, ETH Zurich
In this paper, we seek to measure how much information a component in a neural network could extract from the representations fed into it. Our work stands in contrast to prior probing work, most of which investigates how much information a model’s representations contain. This shift in perspective leads us to propose a new principle for probing, the architectural bottleneck principle: In order to estimate how much information a given component could extract, a probe should look exactly like the component. Relying on this principle, we estimate how much syntactic information is available to transformers through our attentional probe, a probe that exactly resembles a transformer’s self-attention head. Experimentally, we find that, in three models (BERT, ALBERT, and RoBERTa), a sentence’s syntax tree is mostly extractable by our probe, suggesting these models have access to syntactic information while composing their contextual representations. Whether this information is actually used by these models, however, remains an open question.
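To make the abstract's idea concrete, the following is a minimal sketch of what an attentional probe could look like: a single attention head that, for each word, produces a distribution over candidate syntactic heads from frozen contextual representations. All names, shapes, and the use of NumPy here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def attentional_probe(H, W_q, W_k):
    """Single-head attention probe (illustrative sketch).

    H   : (n, d) frozen contextual representations for n words
    W_q : (d, d_k) learned query projection (hypothetical parameter)
    W_k : (d, d_k) learned key projection (hypothetical parameter)

    Returns an (n, n) matrix whose row i is a softmax distribution
    over which word is word i's syntactic head.
    """
    Q = H @ W_q                                   # queries, (n, d_k)
    K = H @ W_k                                   # keys,    (n, d_k)
    scores = Q @ K.T / np.sqrt(K.shape[1])        # scaled dot-product logits
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)     # softmax over candidate heads
    return probs

# Toy usage: 4 words, 8-dim representations, 4-dim head.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))
W_q = rng.standard_normal((8, 4))
W_k = rng.standard_normal((8, 4))
P = attentional_probe(H, W_q, W_k)
```

In practice the projections would be trained to predict gold dependency heads while the underlying model stays frozen, so that the probe measures only what a real self-attention head could extract from the representations.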