While the discussion of the multiple-comparison problem in statistics on pp. 441-42 of your book is mathematically correct, it is not quite the usual Bonferroni correction, and it suggests (incorrectly) that the Bonferroni correction is limited to independent tests (which it is not).
Specifically, suppose that one does n tests T_i (1 <= i <= n) simultaneously, each at significance level alpha_1, and wants to know the appropriate overall alpha (or P-value) for at least one of the tests T_i to show a positive result. The usual Bonferroni correction (as you imply) is alpha_1 = alpha/n, or equivalently alpha = n*alpha_1.
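In code the correction is just a division; here is a minimal Python sketch (the values of n, alpha, and the P-values are made up for illustration):

```python
# Suppose we run n tests and want an overall (experiment-wide) level alpha.
n = 20
alpha = 0.05

# Bonferroni: run each individual test at level alpha_1 = alpha / n.
alpha_1 = alpha / n

# Equivalently, a per-test P-value p is significant if n * p <= alpha.
p_values = [0.001, 0.004, 0.03]               # hypothetical per-test P-values
significant = [p for p in p_values if p <= alpha_1]
print(alpha_1, significant)                   # 0.0025 [0.001]
```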
However, the derivation that you give on pp. 441-42 requires the tests to be independent and fails if they are not. The usual derivation is as follows. In general, for events E_1, E_2, ..., E_n one has
(Boole's Law)   P( Union_{i=1}^n E_i ) <= Sum_{i=1}^n P(E_i),
which follows either from looking at a Venn diagram or from writing P(E) = Exp(I_E), where I_E is the indicator (or characteristic) function of the event E. Then I_{Union_{i=1}^n E_i} <= Sum_{i=1}^n I_{E_i} pointwise on the probability space, which implies the inequality above.
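The inequality is easy to check exactly on a finite probability space; here is a small Python sketch with deliberately dependent (arbitrarily overlapping) events, where the space size, event sizes, and seed are arbitrary choices:

```python
import random

random.seed(0)

# A finite probability space of N equally likely points, with n events
# that overlap however they like -- no independence is assumed.
N, n = 1000, 5
points = list(range(N))
events = [set(random.sample(points, 80)) for _ in range(n)]   # each P(E_i) = 0.08

p_union = len(set().union(*events)) / N
sum_p = sum(len(E) / N for E in events)

# Boole's law: P(Union E_i) <= Sum P(E_i), with no independence needed.
assert p_union <= sum_p
print(p_union, "<=", sum_p)
```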
If the E_i are the CRITICAL REGIONS of the tests T_i, so that P(E_i) = alpha_1, then the overall or "experiment-wide" critical region, where at least one of the tests T_i succeeds, is E = Union_{i=1}^n E_i, and Boole's law says that the overall alpha satisfies alpha <= Sum_{i=1}^n P(E_i) = n*alpha_1.
A historical note: Bonferroni rewrote Boole's law in terms of complementary events as
(Bonferroni's Law)   P( Intersection_{i=1}^n E_i ) >= Sum_{i=1}^n P(E_i) - (n-1),
where now the E_i are the ACCEPTANCE REGIONS of the tests rather than the critical regions. Before Neyman et al. formalized statistical testing, many people apparently thought in terms of acceptance regions rather than critical regions, and so were happier with Bonferroni's formulation. If P(E_i) >= 1 - alpha_1 with alpha_1 = alpha/n, then Bonferroni's law says P( Intersection_{i=1}^n E_i ) >= n(1 - alpha_1) - (n-1) = 1 - n*alpha_1.
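Bonferroni's complementary form can be checked numerically in the same finite-space style; in this sketch each acceptance region omits exactly 5% of the points, and the sizes and seed are again arbitrary:

```python
import random

random.seed(1)

N, n = 1000, 4
alpha_1 = 0.05
points = list(range(N))

# Acceptance regions: each E_i omits 50 of the N points, so P(E_i) = 0.95
# = 1 - alpha_1, with arbitrary dependence between the tests.
events = [set(points) - set(random.sample(points, 50)) for _ in range(n)]

p_inter = len(set.intersection(*events)) / N

# Bonferroni's law: P(Inter E_i) >= Sum P(E_i) - (n-1)
#                                 = n*(1 - alpha_1) - (n-1) = 1 - n*alpha_1.
assert p_inter >= 1 - n * alpha_1
print(p_inter, ">=", 1 - n * alpha_1)
```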
In particular, the usual Bonferroni correction (as derived from either Boole's or Bonferroni's laws) does not require independence.
Incidentally, the independence-based formula alpha = 1 - (1 - alpha_1)^n is FALSE for nonindependent tests. (Your discussion does not claim otherwise.) I thought that you might be interested in the counterexample in the Appendix below.
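A toy counterexample in the same spirit (not the one in the Appendix): two tests at level alpha_1 = 0.5 whose critical regions are disjoint halves of a uniform space, i.e. as negatively dependent as possible.

```python
# Uniform probability on [0,1); critical regions E_1 = [0, 0.5), E_2 = [0.5, 1).
# Each test has level alpha_1 = 0.5, but the regions are disjoint, so the
# tests are extremely negatively dependent.
alpha_1, n = 0.5, 2

true_alpha = 1.0                          # P(E_1 union E_2) = 0.5 + 0.5
indep_formula = 1 - (1 - alpha_1) ** n    # = 0.75: understates the true alpha
boole_bound = n * alpha_1                 # = 1.0: still a valid upper bound

assert true_alpha > indep_formula         # the independence formula fails here
assert true_alpha <= boole_bound          # Boole's law still holds
print(true_alpha, indep_formula, boole_bound)
```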
Did you take the discussion of the Bonferroni correction on pp. 441-42 of your book from Jurg Ott's treatment in his book? He has a somewhat similar discussion, but is apparently more interested in sharpness than in getting a general upper bound. Incidentally, if n^2 max_{i<j} P(E_i intersect E_j) is much smaller than n*alpha_1 (which it often will be in this case), then Boole's law is also nearly sharp. The latter follows from the inclusion-exclusion inequalities in probability.
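The relevant second-order inclusion-exclusion (Bonferroni-type) inequality is P(Union E_i) >= Sum P(E_i) - Sum_{i<j} P(E_i intersect E_j), and it makes the sharpness claim easy to check numerically; a sketch with made-up event sizes and seed:

```python
import itertools
import random

random.seed(2)

N, n = 10_000, 8
points = list(range(N))
events = [set(random.sample(points, 50)) for _ in range(n)]   # each P(E_i) = 0.005

p_union = len(set().union(*events)) / N
upper = sum(len(E) for E in events) / N                       # Boole: n*alpha_1 = 0.04
pairs = sum(len(Ei & Ej) for Ei, Ej in itertools.combinations(events, 2)) / N
lower = upper - pairs                                         # second-order lower bound

# When the pairwise term is much smaller than n*alpha_1, the two bounds
# pinch together and Boole's law is nearly sharp.
assert lower <= p_union <= upper
print(lower, "<=", p_union, "<=", upper)
```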
Two useful references are
Bonferroni, C. E. (1936) Teoria statistica delle classi e calcolo delle probabilità. Pubblicazioni del R. Istituto Superiore di Scienze Economiche e Commerciali di Firenze 8, 3-62.
Miller, Rupert G. (1981) Simultaneous Statistical Inference. 2nd ed. Springer-Verlag, pp. 6-8.
The second is a good standard reference for Bonferroni and Bonferroni-like corrections. The first is Bonferroni's relevant paper. You might be interested in including Bonferroni's paper in your LITERATURE CITED section, at least for variety. I didn't see many references there that were published in Italian in the 1930s.
I got these references from a Web site that someone maintains on Carlo Bonferroni: http://www.nottingham.ac.uk/~mhzmd/bonf.html
Stanley Sawyer
Department of Mathematics
Washington University
St. Louis, MO 63130