Kellstedt & Whitten, Chapter 10
Multivariate regression is what's required to get your theory over the fourth "causal hurdle" - i.e., ruling out the possibility that some other variable is actually causing the relationship you see between your IV and DV.
The proofs make it clear that this is necessary: if you use the bivariate regression equation when other relevant IVs exist, the error term absorbs the variance those omitted IVs explain, and your estimated coefficient picks up some of their effect. In short, your regression line is going to overstate (or understate) the relationship if you fail to control for other independent variables. I don't think this chapter is talking about confounding variables yet - it's just talking about adding additional IVs to your model.
The only times you can skip multivariate regression are when the additional IV has no covariance with the DV, and when the two IVs are perfectly uncorrelated with each other. In the first case, the proposed IV of course has no effect to omit; and in the second, while both IVs have an effect on the DV, none of your IV's estimated effect is being masked (or inflated) by the other IV's effect.
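This isn't from the book, but the two cases are easy to see in a quick simulation. The variable names and coefficients below are made up for illustration: y depends on both x and an omitted variable z. When z is correlated with x, the bivariate slope of y on x is biased away from the true value; when z is uncorrelated with x, the bivariate slope comes out fine even though z still affects y.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def bivariate_slope(x, y):
    # OLS slope of y on x alone: cov(x, y) / var(x)
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Case 1: omitted z is correlated with x -> bivariate slope is biased.
z = rng.normal(size=n)
x_corr = z + rng.normal(size=n)                  # x and z move together
y1 = 2.0 * x_corr + 3.0 * z + rng.normal(size=n)  # true effect of x is 2.0
biased = bivariate_slope(x_corr, y1)              # lands well above 2.0

# Case 2: omitted z is uncorrelated with x -> bivariate slope is unbiased.
x_indep = rng.normal(size=n)
y2 = 2.0 * x_indep + 3.0 * z + rng.normal(size=n)
unbiased = bivariate_slope(x_indep, y2)           # close to the true 2.0

print(biased, unbiased)
```

With these numbers the biased slope converges to 3.5 rather than 2.0 (the x coefficient absorbs part of z's effect), which is the "masking" problem the chapter's proofs formalize.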
The chapter also talks about measuring substantive significance, first by introducing standardized coefficients (which are just a way of converting your coefficients from actual units to fractions of a standard deviation; sort of like going from utils to dollars in your utility functions); then by basically arguing that there's no good way to know what constitutes substantive significance, except that without statistical significance, it doesn't exist. I feel like Potter Stewart.
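The standardization step is mechanical enough to sketch. The variables and coefficients here are invented for illustration, not from the book: a DV that depends on income (measured in dollars, so its raw coefficient is tiny) and education (measured in years). Multiplying each raw slope by sd(IV)/sd(DV) puts both on the same "standard deviations of DV per standard deviation of IV" scale, so they can be compared.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical IVs on very different scales (independent, so bivariate
# slopes are fine here).
income = rng.normal(50_000, 15_000, size=n)   # dollars
years_ed = rng.normal(14, 2, size=n)          # years of schooling
dv = 0.0001 * income + 0.5 * years_ed + rng.normal(size=n)

def ols_slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Raw slopes are on incomparable scales: per-dollar vs. per-year.
b_income = ols_slope(income, dv)   # tiny number
b_ed = ols_slope(years_ed, dv)     # much larger number

# Standardized coefficient: SDs of DV per SD of IV.
beta_income = b_income * np.std(income, ddof=1) / np.std(dv, ddof=1)
beta_ed = b_ed * np.std(years_ed, ddof=1) / np.std(dv, ddof=1)

print(beta_income, beta_ed)
```

Note that income's raw coefficient is orders of magnitude smaller than education's, yet its standardized coefficient is larger, which is exactly the kind of apples-to-apples comparison standardization buys you. It still doesn't tell you whether either effect is substantively big - hence Potter Stewart.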