This article uses three-wave survey data (that is, a panel of respondents who are reinterviewed twice at set intervals) to measure the stability of political attitudes and, in an innovation over prior research, the stability of the intensity of those attitudes. Frankly, some of the methods the authors use to analyze their data are over my head, so I can't evaluate them on their merits. They test three hypotheses: 1) that the youngest adults have the least stable political attitudes; 2) that attitude stability increases with age; and 3) that symbolic attitudes, because they are formed early and are highly salient, are the most likely to remain stable over time. This is a test of the prevailing conventional wisdom, which is why the article was publishable despite its negative findings. Examining the data in the aggregate, the authors find that while the youngest cohort does exhibit the least stability, the difference is not statistically significant and so cannot be distinguished from noise; that there is no relationship between age and attitude stability; and that symbolic attitudes are no more stable than nonsymbolic ones. When they disaggregate the data by cohort, however, they find support for the ideas that young adults are the least stable and that stability increases with age. They also find that while the stability of the intensity of party identification decreases with age, party identification itself (which they call "direction," having operationalized party ID with a directional measure) stabilizes with age.
My take: this article seems to be pulling in two different directions at once, which makes me suspect statistical skullduggery. The fact that my quantitative chops are not yet strong enough to evaluate their methods makes me more suspicious still. But assuming the methods are valid, I'm left wondering what story explains these findings. Perhaps aggregating the cohorts washed out the effect: a relationship that pulled in different directions depending on the cohort would look random once the data were pooled. If so, the authors have to justify their disaggregation (beyond "this piece isn't publishable without some sort of positive finding"). They justify it by a) arguing that party ID is the "prototypic symbolic attitude" and b) noting that it is the only attitude with data suitable for disaggregation. To the extent that party ID really does differ in kind from other political attitudes, I agree that treating it separately is appropriate. But unless Sears, who is cited extensively with little discussion, supplies the justification, the authors have not made a sufficient case for that treatment.
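To make the washout intuition concrete, here is a toy simulation (entirely my own stylized construction, not the authors' data or method): two hypothetical cohorts whose wave-1 to wave-2 attitude relationships run in opposite directions each show a strong within-cohort correlation, yet the pooled correlation looks like noise.

```python
import random

random.seed(0)

def correlation(xs, ys):
    """Pearson correlation, computed by hand to stay dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Cohort A (hypothetical): attitudes highly stable; wave 2 tracks wave 1.
wave1_a = [random.gauss(0, 1) for _ in range(500)]
wave2_a = [x + random.gauss(0, 0.3) for x in wave1_a]

# Cohort B (hypothetical): attitudes reverse between waves.
wave1_b = [random.gauss(0, 1) for _ in range(500)]
wave2_b = [-x + random.gauss(0, 0.3) for x in wave1_b]

r_a = correlation(wave1_a, wave2_a)          # strongly positive
r_b = correlation(wave1_b, wave2_b)          # strongly negative
r_pooled = correlation(wave1_a + wave1_b, wave2_a + wave2_b)  # near zero

print(f"cohort A: {r_a:.2f}, cohort B: {r_b:.2f}, pooled: {r_pooled:.2f}")
```

The exaggerated reversal in cohort B is just for illustration; the same masking happens, more weakly, whenever cohort-specific relationships differ in sign or slope.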