Tuesday, November 06th, 2012 | Author: Konrad Voelkel
In this short post, I want to give some intuition for homotopy limits. Homotopy (co)limits appear whenever one has a notion of homotopy equivalence or weak equivalence between objects and one wants constructions that do not distinguish between equivalent objects. The most prominent settings are, of course, classical homotopy theory and homological algebra. Although they are not necessary for the definition of homotopy (co)limits, I also talk about model categories.
First, let us recall what a limit is:
Given a small category $I$ (let's think of a diagram shape like $\bullet \to \bullet \leftarrow \bullet$) and a functor $F \colon I \to \mathcal{C}$ (let's think of a diagram in $\mathcal{C}$ that has the prescribed shape), we look at all cones over $F$. A cone over $F$ is just an object $X$ of $\mathcal{C}$ together with morphisms $X \to F(i)$, one for each object $i$ of $I$, such that these morphisms commute with the morphisms in the image of $F$. This thing is called a cone because we can imagine the diagram to be planar (on a blackboard) while the object $X$ hovers above it; the morphisms down to the diagram look like a cone. It is clear what a morphism between cones should be: a morphism of the objects that commutes with the morphisms down to the diagram. This yields a category of cones over $F$, conveniently called $\mathrm{Cone}(F)$. We call a terminal object in $\mathrm{Cone}(F)$ a limit of $F$. It is important to notice that the morphisms down to the diagram are part of the limit, not only the object itself. So, to state it briefly, limits are terminal cones.
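To make this concrete, here is a minimal sketch (my own illustration, not from the post) of the limit of a pullback diagram $X \to Z \leftarrow Y$ in the category of finite sets: the limit object is the fiber product, the set of pairs that agree in $Z$, and the two projections are the cone morphisms. The particular sets and maps below are made up for the example.

```python
# Limit of the pullback diagram X --f--> Z <--g-- Y in finite sets:
# the fiber product {(x, y) : f(x) == g(y)}, whose two projection
# maps form the terminal cone over the diagram.

def fiber_product(X, Y, f, g):
    """Return the limit (as a set of pairs) of X --f--> Z <--g-- Y in Set."""
    return [(x, y) for x in X for y in Y if f(x) == g(y)]

# Illustrative data (hypothetical, chosen for the example):
X = [0, 1, 2, 3]
Y = ["a", "b", "c"]
f = lambda x: x % 2        # X -> Z with Z = {0, 1}
g = lambda y: len(y) % 2   # Y -> Z; every string here has length 1

P = fiber_product(X, Y, f, g)
print(P)  # pairs (x, y) with x odd, since g(y) == 1 for every y here
```

Any other cone over the diagram factors uniquely through this set via its projections, which is exactly the terminality in $\mathrm{Cone}(F)$.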
Homotopy limits are quite similar: they are terminal homotopy cones. Let's see what that means. First, let me tell you what it doesn't mean: homotopy limits are not just limits up to weak equivalence, and they are also not just limits computed in the homotopy category. But homotopy limits are only well-defined up to weak equivalence (unless you fix a particular computational recipe). Technically, this means there is no unique holim functor, but we can safely ignore that for a first approximation.
We first suppose that we have a category $\mathcal{C}$ equipped with a lluf subcategory $\mathcal{W}$, where lluf just means that all objects of $\mathcal{C}$ are also objects of $\mathcal{W}$; we may just have fewer morphisms. We call the morphisms in $\mathcal{W}$ the weak equivalences of $\mathcal{C}$. We define the homotopy category $\mathrm{Ho}(\mathcal{C}) := \mathcal{C}[\mathcal{W}^{-1}]$ to be the localization of $\mathcal{C}$ along the weak equivalences.
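In symbols, the localization is characterized by a universal property (a standard formulation, added here for orientation):

```latex
% The localization functor
%   \gamma : \mathcal{C} \to \mathrm{Ho}(\mathcal{C}) = \mathcal{C}[\mathcal{W}^{-1}]
% inverts the weak equivalences, and is universal with this property:
% any functor out of \mathcal{C} that sends all of \mathcal{W} to
% isomorphisms factors uniquely through \gamma.
\[
  \gamma \colon \mathcal{C} \longrightarrow \mathrm{Ho}(\mathcal{C})
    = \mathcal{C}[\mathcal{W}^{-1}],
  \qquad
  \gamma(w) \ \text{an isomorphism for all } w \in \mathcal{W}.
\]
```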
Homotopy limits solve the following problem: suppose I have a diagram in $\mathcal{C}$, but I don't mind replacing its objects by weakly equivalent ones; what, then, is the terminal cone (up to weak equivalence) that does not depend on these choices? The difference from a limit in the homotopy category is that we look at honest morphisms in $\mathcal{C}$, not at morphisms in the homotopy category.
So, now we have a vague idea of what homotopy limits are. How does one compute them? That is where model categories enter the story. In principle, model categories are not necessary to define homotopy limits, and homotopy limits don't depend on a model structure - they depend only on the class of weak equivalences. But model categories allow one to compute certain homotopy limits.
The default approach: suppose we have a category $\mathcal{C}$ with a class of weak equivalences $\mathcal{W}$ as before. Now construct a model structure on $\mathcal{C}$ around these weak equivalences. Then suppose the index category $I$ for the diagram whose homotopy limit you want to compute is a Reedy category, which means you can assign a degree to each object and every morphism factors uniquely into one that lowers degree followed by one that raises degree (a technical condition, satisfied by most small interesting diagrams). Then there is a convenient model structure on the diagram category $\mathcal{C}^I$ (the Reedy model structure), and we can define the homotopy limit as a fibrant replacement in $\mathcal{C}^I$ followed by the ordinary limit. This is a classical derived-functor definition.
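In derived-functor notation, the recipe reads (standard notation, assuming a Reedy-fibrant replacement $F \xrightarrow{\sim} RF$ in $\mathcal{C}^I$):

```latex
% The homotopy limit as the right derived functor of the limit:
% first replace the diagram F by a Reedy-fibrant diagram RF,
% then take the ordinary limit.
\[
  \operatorname{holim}_I F \;:=\; \lim_I (RF),
  \qquad
  F \xrightarrow{\ \sim\ } RF
  \ \text{a fibrant replacement in } \mathcal{C}^I .
\]
```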
The crux is: what does the fibrant replacement look like? This question is as hard as any other question of the type "what does the injective resolution look like?". Only in particular cases can one really compute a homotopy limit, for example for pullbacks.
If you have a diagram $X \xrightarrow{f} Z \xleftarrow{g} Y$, it turns out that you can compute the homotopy pullback (in any proper simplicial model category, like topological spaces) by replacing one of the maps by a fibration (that means, replacing $Y$ with a weakly equivalent $\tilde{Y}$ such that $g$ factors over $\tilde{Y}$ as the weak equivalence $Y \xrightarrow{\sim} \tilde{Y}$ followed by a fibration $\tilde{Y} \twoheadrightarrow Z$) and then computing the ordinary pullback.
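For topological spaces this replacement can be written down explicitly via the mapping path space (a standard construction, spelled out here for concreteness): replace $g \colon Y \to Z$ by

```latex
% Mapping path space replacement of g : Y -> Z.
% The inclusion y |-> (y, const_{g(y)}) is a homotopy equivalence
% Y -> \tilde{Y}, and evaluation at 1 is a fibration.
\[
  \tilde{Y} \;=\; \{\, (y,\gamma) \in Y \times Z^{[0,1]}
    \;:\; \gamma(0) = g(y) \,\},
  \qquad
  \tilde{g}(y,\gamma) \;=\; \gamma(1).
\]
% The homotopy pullback is then the ordinary pullback along
% the fibration \tilde{g}: points of X and Y together with a
% path in Z connecting their images.
\[
  X \times^{h}_{Z} Y
  \;\simeq\;
  X \times_{Z} \tilde{Y}
  \;=\;
  \{\, (x, y, \gamma)
    \;:\; \gamma(0) = g(y),\ \gamma(1) = f(x) \,\}.
\]
```

For example, when $X$ and $Y$ are points this recovers the space of paths in $Z$ between their two images, rather than the (usually empty) ordinary pullback.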
I recommend reading Dugger's exposition of homotopy colimits to learn more. There is some nice geometric intuition available for homotopy colimits, which you shouldn't miss!