One often hears about tight loops in the context of programming. I never really got what it meant. Until today.
A tight loop is a loop whose body does very little work per iteration, yet dominates the running time of the larger program. For example, in an O(n) loop, the faster each iteration executes, the tighter the loop is.
Quoting the original text from which I learned this definition:
When an algorithm is given a Big O designation such as O(n log n), it tells you how the algorithm will scale; that is, how well it will perform as you increase the size of n to a large number. Let's say I have an algorithm that is O(n) over a data set. That means that there's a loop in the code somewhere that executes n times, once for each element in the data set. How long will it take to execute? Assuming that each iteration of the loop takes about the same amount of time, it will take m * n, where m is the amount of time it takes to process one element of the data set. It is clear that it will take longer to process the data set if m is higher. Big O doesn't address that time at all; it only states that (in this case), the algorithm will scale linearly as a function of the data set size. So a "tight loop" is simply one that has a smaller m. This is the "hidden constant factor" that the book talks about. Reducing m improves efficiency because any improvement in m will be multiplied over the whole data set.
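To make the "smaller m" idea concrete, here is a minimal sketch in Python (the function names and the hoisted computation are my own illustration, not from the quoted text). Both functions are O(n) and produce the same result, but the second one moves redundant work out of the loop body, shrinking the per-iteration cost m:

```python
def loose_sum(data):
    """O(n), but with a larger constant factor m."""
    total = 0.0
    for x in data:
        # The same value is recomputed on every iteration,
        # inflating the cost of each pass through the loop.
        scale = len(data) ** 0.5
        total += x * scale
    return total

def tight_sum(data):
    """Also O(n), but the loop body is kept tight."""
    scale = len(data) ** 0.5  # computed once, hoisted out of the loop
    total = 0.0
    for x in data:
        total += x * scale  # minimal work per iteration: smaller m
    return total
```

Big O cannot distinguish these two functions; both scale linearly. The difference only shows up in the hidden constant factor, which is exactly what tightening a loop improves.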
Another must-read is "What is a tight loop?".