introduction
Continuing from the previous article, we study the second half of Chapter 2: one-dimensional random variables and their distributions.
3. Common random variables and their distributions
3.1 Common discrete random variables and their distribution laws
(1) The (0-1) distribution
Let the random variable $X$ take only the values 0 and 1, with probabilities $P\{X=1\}=p$ and $P\{X=0\}=1-p$ $(0<p<1)$. Then $X$ is said to follow the (0-1) distribution, denoted $X \sim B(1,p)$.
(2) Binomial distribution
Let the distribution law of the random variable $X$ be $P\{X=k\}=C_n^k p^k (1-p)^{n-k}$, where $k=0,1,2,\dots,n$ and $0<p<1$. Then $X$ is said to follow the binomial distribution, denoted $X \sim B(n,p)$.
Recall the Bernoulli trials from Chapter 1: the number of successes in $n$ independent Bernoulli trials follows exactly this binomial distribution.
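As a quick sketch (not from the original text; the parameter values $n=10$, $p=0.3$ are illustrative), the binomial distribution law can be computed directly with Python's `math.comb`:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """P{X = k} = C(n, k) * p^k * (1 - p)^(n - k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Example: X ~ B(10, 0.3).
n, p = 10, 0.3
print(round(binom_pmf(2, n, p), 4))  # P{X = 2} -> 0.2335

# The probabilities over k = 0..n sum to 1 (binomial theorem).
print(round(sum(binom_pmf(k, n, p) for k in range(n + 1)), 10))  # 1.0
```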
(3) Poisson distribution
Let the distribution law of the discrete random variable $X$ be $P\{X=k\}=\frac{\lambda^k}{k!}e^{-\lambda}$, where $\lambda>0$ and $k=0,1,2,\dots$. Then $X$ is said to follow the Poisson distribution with parameter $\lambda$, denoted $X \sim P(\lambda)$.
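A minimal sketch (not from the text; $\lambda=3$ is an illustrative value): the Poisson probabilities range over all $k=0,1,2,\dots$, and a truncated sum already gets very close to 1.

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """P{X = k} = lambda^k / k! * e^{-lambda}."""
    return lam**k / factorial(k) * exp(-lam)

lam = 3.0
# Truncating the infinite sum at k = 50 leaves a negligible tail.
partial = sum(poisson_pmf(k, lam) for k in range(50))
print(round(partial, 10))  # 1.0
```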
(4) Geometric distribution
Let the distribution law of the discrete random variable $X$ be $P\{X=k\}=p(1-p)^{k-1}$, where $k=1,2,\dots$. Then $X$ is said to follow the geometric distribution, denoted $X \sim G(p)$.
A random variable $X$ following the geometric distribution can be understood this way: consider Bernoulli trials with only two outcomes $A$ and $\overline{A}$, where $P(A)=p$; then $X$ is the number of the trial on which $A$ first occurs.
For example, $X=2$ means $A$ did not occur on the first trial but occurred on the second; $X=n$ means $A$ did not occur in the first $n-1$ trials and occurred on the $n$th. Seen this way, the formula is easy to remember.
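The "number of trials until $A$ first occurs" interpretation can be checked by simulation (a sketch, not from the text; $p=0.4$ and the sample size are arbitrary choices):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def geometric_trial(p: float) -> int:
    """Run Bernoulli trials until A first occurs; return the trial count."""
    count = 1
    while random.random() >= p:  # A did not occur; try again
        count += 1
    return count

p, trials = 0.4, 100_000
hits = sum(1 for _ in range(trials) if geometric_trial(p) == 2)
# Theory: P{X = 2} = (1 - p) * p = 0.24; the empirical frequency is close.
print(hits / trials)
```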
(5) Hypergeometric distribution
Let the distribution law of the discrete random variable $X$ be $P\{X=k\}=\frac{C_M^k \cdot C_{N-M}^{n-k}}{C_N^n}$, where $M,N,k,n$ are natural numbers with $M \leq N$, $n \leq N$, and $\max\{0,\,n-(N-M)\} \leq k \leq \min\{M,n\}$. Then $X$ is said to follow the hypergeometric distribution, denoted $X \sim H(n,M,N)$.
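A small sketch (not from the text; drawing $n=5$ items from $N=20$ of which $M=7$ are "marked" is an illustrative setup) showing that the hypergeometric probabilities over the valid range of $k$ sum to 1:

```python
from math import comb

def hypergeom_pmf(k: int, n: int, M: int, N: int) -> float:
    """P{X = k} = C(M, k) * C(N - M, n - k) / C(N, n)."""
    return comb(M, k) * comb(N - M, n - k) / comb(N, n)

n, M, N = 5, 7, 20
lo, hi = max(0, n - (N - M)), min(M, n)  # valid range of k
total = sum(hypergeom_pmf(k, n, M, N) for k in range(lo, hi + 1))
print(round(total, 10))  # 1.0 (Vandermonde's identity)
```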
3.2 Common continuous random variables and their probability densities
(1) Uniform distribution
Let the probability density of the random variable $X$ be
$$f(x)=\begin{cases} \frac{1}{b-a}, & a \leq x \leq b \\ 0, & \text{otherwise} \end{cases}$$
Then $X$ is said to be uniformly distributed on the interval $(a,b)$, denoted $X \sim U(a,b)$.
If the random variable $X \sim U(a,b)$, its distribution function is
$$F(x)=\begin{cases} 0, & x < a \\ \frac{x-a}{b-a}, & a \leq x \leq b \\ 1, & x \geq b \end{cases}$$
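The piecewise distribution function translates directly into code (a sketch, not from the text; $a=2$, $b=5$ are arbitrary):

```python
def uniform_cdf(x: float, a: float, b: float) -> float:
    """F(x) for X ~ U(a, b): 0 below a, (x - a)/(b - a) on [a, b], 1 above b."""
    if x < a:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return 1.0

a, b = 2.0, 5.0
print(uniform_cdf(3.5, a, b))  # 0.5 -- the midpoint of the interval
```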
(2) Exponential distribution
Let the probability density of the random variable $X$ be
$$f(x)=\begin{cases} \lambda e^{-\lambda x}, & x > 0 \\ 0, & x \leq 0 \end{cases} \quad (\lambda > 0)$$
Then $X$ is said to follow the exponential distribution with parameter $\lambda$, denoted $X \sim E(\lambda)$.
If the random variable $X \sim E(\lambda)$, its distribution function is
$$F(x)=\begin{cases} 1-e^{-\lambda x}, & x \geq 0 \\ 0, & x < 0 \end{cases}$$
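That $F(x)=1-e^{-\lambda x}$ really is the integral of the density can be checked numerically (a sketch, not from the text; $\lambda=2$ and $x=1.5$ are illustrative):

```python
from math import exp

lam = 2.0
f = lambda x: lam * exp(-lam * x)  # density for x > 0
F = lambda x: 1 - exp(-lam * x)    # claimed distribution function

# Midpoint-rule integral of f over [0, x] should match F(x).
x, steps = 1.5, 100_000
dx = x / steps
integral = sum(f((i + 0.5) * dx) * dx for i in range(steps))
print(abs(integral - F(x)) < 1e-6)  # True
```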
(3) Normal distribution
Let the probability density of the random variable $X$ be
$$f(x)=\frac{1}{\sqrt{2\pi}\,\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}} \quad (-\infty < x < +\infty)$$
Then $X$ is said to follow the normal distribution, denoted $X \sim N(\mu,\sigma^2)$; its probability density function is shown in the figure below:
In particular, if $\mu=0$ and $\sigma=1$, then $X$ is said to follow the standard normal distribution, denoted $X \sim N(0,1)$; its probability density is
$$\varphi(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}} \quad (-\infty < x < +\infty),$$
as shown in the figure below:
and its distribution function is $\varPhi(x)=\int_{-\infty}^x \varphi(t)\,dt$. The normal distribution has the following properties:
(1) If $X \sim N(0,1)$, its probability density $\varphi(x)$ is an even function, so $P\{X \leq 0\}=\varPhi(0)=0.5$ and $P\{X \leq -a\}=\varPhi(-a)=P\{X > a\}=1-\varPhi(a)$.
(2) If the random variable $X \sim N(\mu,\sigma^2)$, then $P\{X \leq \mu\}=P\{X > \mu\}=0.5$; that is, the graph of the normal density is symmetric about $x=\mu$.
(3) If the random variable $X \sim N(\mu,\sigma^2)$, then $\frac{X-\mu}{\sigma} \sim N(0,1)$.
(4) If the random variable $X \sim N(\mu,\sigma^2)$, then $P\{a < X \leq b\}=F(b)-F(a)=\varPhi\left(\frac{b-\mu}{\sigma}\right)-\varPhi\left(\frac{a-\mu}{\sigma}\right)$.
(5) $$\varPhi(a)+\varPhi(b)\begin{cases} <1, & a+b<0 \\ =1, & a+b=0 \\ >1, & a+b>0 \end{cases}$$
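Properties (4) and (5) can be verified with the standard normal CDF, which Python's `math.erf` gives via $\varPhi(x)=\frac{1}{2}\left(1+\operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right)$ (a sketch, not from the text; $\mu=10$, $\sigma=2$ are illustrative):

```python
from math import erf, sqrt

def Phi(x: float) -> float:
    """Standard normal distribution function via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

mu, sigma = 10.0, 2.0
# Property (4): P{a < X <= b} = Phi((b - mu)/sigma) - Phi((a - mu)/sigma).
p = Phi((12 - mu) / sigma) - Phi((8 - mu) / sigma)
print(round(p, 4))  # 0.6827 -- P{mu - sigma < X <= mu + sigma}

# Property (5) with b = -a: Phi(a) + Phi(-a) = 1 for any a.
print(abs(Phi(1.3) + Phi(-1.3) - 1) < 1e-12)  # True
```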
4. Distribution of random variable function
Let $X$ be a random variable whose distribution is known, and let $Y=\varphi(X)$ be a function of $X$. We now study how to obtain the distribution of $Y$ from the distribution of $X$.
(1) Distribution of discrete random variable functions
Let $X$ be a discrete random variable and $Y=\varphi(X)$. To obtain the distribution law of $Y$, compute the possible values of $Y$ from the possible values of $X$, and add up the probabilities of all values of $X$ that map to the same value of $Y$.
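The "add up probabilities of $X$-values mapping to the same $Y$-value" step can be sketched as follows (the distribution law of $X$ here is a made-up example, not from the text):

```python
from collections import defaultdict

# A hypothetical distribution law for X (illustrative values only).
pmf_X = {-1: 0.25, 0: 0.25, 1: 0.5}

# Distribution law of Y = X^2: values -1 and 1 both map to y = 1,
# so their probabilities are summed.
pmf_Y = defaultdict(float)
for x, p in pmf_X.items():
    pmf_Y[x * x] += p

print(dict(pmf_Y))  # {1: 0.75, 0: 0.25}
```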
(2) Distribution of continuous random variable function
Let $X$ be a continuous random variable with probability density $f(x)$, and let $Y=\varphi(X)$. To find the distribution of $Y$, first write down its distribution function $F_Y(y)=P\{Y \leq y\}=P\{\varphi(X) \leq y\}$, evaluate this probability using the distribution of $X$, and then differentiate $F_Y$ to obtain the density of $Y$.
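As a concrete sketch of this method (the example $X \sim U(0,1)$, $Y=X^2$ is my own, not from the text): for $0 \leq y \leq 1$, $F_Y(y)=P\{X^2 \leq y\}=P\{X \leq \sqrt{y}\}=\sqrt{y}$, so $f_Y(y)=\frac{1}{2\sqrt{y}}$. A simulation check:

```python
import random
from math import sqrt

random.seed(1)  # fixed seed so the run is reproducible

# X ~ U(0, 1), Y = X^2.  Derived above: F_Y(y) = sqrt(y) on [0, 1].
y = 0.25
samples = [random.random() ** 2 for _ in range(200_000)]
empirical = sum(s <= y for s in samples) / len(samples)

# Empirical P{Y <= 0.25} should be close to sqrt(0.25) = 0.5.
print(abs(empirical - sqrt(y)) < 0.01)  # True
```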