This particular tail bound is a consequence of the Gaussian isoperimetric inequality: https://en.wikipedia.org/wiki/Gaussian_isoperimetric_inequality
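For reference, a standard statement of that inequality (written here from the usual formulation, not quoted from the post): if γ denotes the standard Gaussian measure and A_r the r-enlargement of a Borel set A, then half-spaces are extremal, which yields

```latex
\[
\gamma(A) \ge \tfrac12
\quad\Longrightarrow\quad
\gamma(A_r) \ge \Phi(r) \ge 1 - \tfrac12\, e^{-r^2/2},
\qquad r \ge 0,
\]
```

where Φ is the standard normal CDF; the last inequality is the familiar Gaussian tail bound.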

Thank you for writing this neat tour through these ideas. I have a very simple question that I am stumped by early in the post. That is, in example (1), I am having trouble seeing how to prove the Gaussian tail-like bound for the r-enlargement of A for all A. I see that the bound is plausible by Chernoff bounds for A = (-r,r), but I don’t see how to easily move to general A with P(A) >= 1/2. Is there an easy way to see this or does it require a more technical, perhaps geometric, analysis?

Dear Salazar,

At least to me, there is no immediate intuitive route from (1) to concentration around the mean. You first have to spot the equivalence between (1) and concentration of Lipschitz functions around their medians (which is neither too obvious nor too complicated). Once you get there, there is one more step to take: the absolute difference between the mean and any median of a Lipschitz function can be uniformly bounded (in the case of Gaussian-like concentration, by a quantity proportional to the Lipschitz constant). With these preliminaries out of the way, consider the sample mean: if the sample size is n, the sample mean is Lipschitz with constant 1/√n with respect to the Euclidean norm on the sample.
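The mean-median step can be made explicit. A minimal sketch, assuming the Gaussian-type concentration bound P(|f - m_f| > r) <= 2 exp(-r^2/(2L^2)) for an L-Lipschitz function f with median m_f:

```latex
\[
|\mathbb{E} f - m_f|
\;\le\; \mathbb{E}\,|f - m_f|
\;=\; \int_0^\infty \mathbb{P}\big(|f - m_f| > r\big)\,dr
\;\le\; \int_0^\infty 2\,e^{-r^2/(2L^2)}\,dr
\;=\; L\sqrt{2\pi},
\]
```

so the mean sits within a constant multiple of the Lipschitz constant of any median, and concentration around the median transfers to concentration around the mean.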

The peasant concentration-of-measure results I learned were, e.g., of LLN and large-deviation type. Can you explain how this is directly related to the r-fattening of a set? I understand you will get to it later, after (1), but intuitively, how is (1) related to how a random variable concentrates around its mean?
