
    Sampling, combinations and CLT


Understanding how sums of random variables behave, and how the sum of a large sample from any distribution is approximately normally distributed.



    Key Skills

    Linear Transformation of Random Variables
    AHL AI 4.14

Suppose that the random variable X is scaled and shifted, producing the random variable aX + b. The expected value and variance of the resulting variable are

$$E(aX + b) = aE(X) + b$$
$$\text{Var}(aX + b) = a^2\,\text{Var}(X)$$
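These two identities are easy to check numerically. The sketch below draws a large sample from an arbitrarily chosen distribution (an exponential with mean 2; any distribution would do) and compares the empirical moments of aX + b with the formulas.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample X (exponential with mean 2, variance 4; an arbitrary choice)
x = rng.exponential(scale=2.0, size=1_000_000)

a, b = 3.0, 5.0
y = a * x + b  # the scaled and shifted variable aX + b

# E(aX + b) = aE(X) + b  and  Var(aX + b) = a^2 Var(X)
print(y.mean())  # close to a*2 + b = 11
print(y.var())   # close to a^2 * 4 = 36
```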
    Expected value of linear combination of random variables
    AHL AI 4.14

For random variables X and Y, the expected value of a linear combination of X and Y is equal to the same linear combination of their expectations. That is,

$$E(aX + bY + c) = aE(X) + bE(Y) + c$$

We can extend this result to a linear combination of any number of random variables X₁, …, Xₙ (independence is not required for this rule):

$$E(a_1 X_1 + a_2 X_2 + \cdots + a_n X_n + c) = a_1 E(X_1) + a_2 E(X_2) + \cdots + a_n E(X_n) + c$$

This rule is sometimes referred to as the linearity of expectation.
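A quick numerical sketch of linearity of expectation, with arbitrarily chosen distributions. Y is deliberately constructed to depend on X, to illustrate that the rule holds even without independence.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

x = rng.normal(10.0, 2.0, size=n)
y = x + rng.uniform(0.0, 1.0, size=n)  # Y depends on X on purpose

a, b, c = 2.0, -3.0, 7.0
z = a * x + b * y + c

# E(aX + bY + c) = aE(X) + bE(Y) + c, even though X and Y are dependent
print(z.mean(), a * x.mean() + b * y.mean() + c)
```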

    Variance of linear combination of random variables
    AHL AI 4.14

If X and Y are independent random variables, the variance of a linear combination of X and Y is the sum of the correspondingly scaled variances. That is,

$$\text{Var}(aX + bY + c) = a^2\,\text{Var}(X) + b^2\,\text{Var}(Y)$$

We can extend this result to a linear combination of any number of independent random variables X₁, …, Xₙ:

$$\text{Var}(a_1 X_1 + a_2 X_2 + \cdots + a_n X_n + c) = a_1^2\,\text{Var}(X_1) + a_2^2\,\text{Var}(X_2) + \cdots + a_n^2\,\text{Var}(X_n)$$

Notice that the variance of the sum of two different observations from one population, Var(X₁ + X₂) = Var(X₁) + Var(X₂), is not equal to the variance of one observation doubled, Var(2X₁) = 4 Var(X₁).
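The distinction in that last remark is a common exam trap, and it shows up clearly in simulation. The sketch below (with an arbitrary N(0, 4) population) compares the variance of two independent observations added together against one observation doubled.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Two independent observations from the same N(0, 4) population
x1 = rng.normal(0.0, 2.0, size=n)
x2 = rng.normal(0.0, 2.0, size=n)

print((x1 + x2).var())  # ~ Var(X1) + Var(X2) = 8
print((2 * x1).var())   # ~ 4 Var(X1) = 16
```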

    Sample mean expected value and variance
    AHL AI 4.15

Suppose X₁, X₂, …, Xₙ are n independent observations of the random variable X. We can find the mean of this sample by adding the observations together and dividing by n. This defines a new random variable, X̄, called the sample mean of the Xᵢ:

$$\bar{X} = \frac{X_1 + X_2 + \cdots + X_n}{n}$$

Since X̄ is a linear combination of independent random variables, its expected value and variance are given by

$$E(\bar{X}) = E(X) \qquad \text{Var}(\bar{X}) = \frac{\text{Var}(X)}{n}$$
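The sketch below checks these formulas by repeatedly drawing samples of size n = 25 from a Uniform(0, 1) population (an arbitrary choice, with E(X) = 1/2 and Var(X) = 1/12) and computing the sample mean of each.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 25            # sample size
trials = 200_000  # number of repeated samples

# Each row is one sample of n observations of X ~ Uniform(0, 1)
samples = rng.uniform(0.0, 1.0, size=(trials, n))
xbar = samples.mean(axis=1)  # one sample mean per trial

# E(Xbar) = E(X) = 1/2,  Var(Xbar) = Var(X)/n = 1/(12*25)
print(xbar.mean())  # close to 0.5
print(xbar.var())   # close to 1/300
```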
    Linear combination of normal random variables
    AHL AI 4.15

Any linear combination of independent normally distributed random variables follows a normal distribution.

For independent random variables X ∼ N(μ_X, σ_X²) and Y ∼ N(μ_Y, σ_Y²), the random variable W = aX + bY + c follows the distribution

$$W \sim N\!\left(a\mu_X + b\mu_Y + c,\; a^2\sigma_X^2 + b^2\sigma_Y^2\right)$$

We can extend this result to a linear combination of any number of independent, normally distributed random variables.
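This result says more than the moment formulas above: the combined variable is itself exactly normal. The sketch below (with arbitrarily chosen parameters) compares a simulated probability for W = aX + bY + c against the CDF of the predicted normal distribution.

```python
import math
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

# X ~ N(3, 2^2), Y ~ N(-1, 1^2), independent (illustrative parameters)
x = rng.normal(3.0, 2.0, size=n)
y = rng.normal(-1.0, 1.0, size=n)

a, b, c = 2.0, 3.0, 1.0
w = a * x + b * y + c  # theory: W ~ N(2*3 + 3*(-1) + 1, 4*4 + 9*1) = N(4, 25)

mu, var = 4.0, 25.0
t = 7.0
empirical = (w < t).mean()
# Normal CDF at t via the error function: P(W < t) = Phi((t - mu)/sigma)
theoretical = 0.5 * (1 + math.erf((t - mu) / math.sqrt(2 * var)))
print(empirical, theoretical)  # both close to Phi(0.6) ~ 0.726
```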

    Central Limit Theorem
    AHL AI 4.15

The Central Limit Theorem states that if X₁, X₂, …, Xₙ are independent observations of the random variable X, with E(Xᵢ) = μ and Var(Xᵢ) = σ² for every i ∈ {1, 2, …, n}, then for sufficiently large n (typically n > 30), regardless of the distribution of X, the sum of the Xᵢ is approximately normally distributed:

$$\sum_{i=1}^{n} X_i \sim N(n\mu, n\sigma^2)$$

Further, the sample mean X̄ = (X₁ + X₂ + ⋯ + Xₙ)/n is approximately normally distributed with mean μ and variance σ²/n:

$$\bar{X} \sim N\!\left(\mu, \frac{\sigma^2}{n}\right)$$
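The striking part of the CLT is that the original distribution does not matter. The sketch below uses a heavily skewed Exponential(1) population (an arbitrary non-normal choice, with μ = σ² = 1) and checks that probabilities for the sample mean with n = 40 already match the normal approximation well.

```python
import math
import numpy as np

rng = np.random.default_rng(5)
n = 40            # sample size (> 30, so the CLT approximation is reasonable)
trials = 200_000

# X ~ Exponential(mean 1): heavily skewed, far from normal
samples = rng.exponential(scale=1.0, size=(trials, n))
xbar = samples.mean(axis=1)

mu, sigma2 = 1.0, 1.0  # E(X) and Var(X) for Exponential(1)

# CLT prediction: Xbar approx N(mu, sigma2/n)
z = 1.0
empirical = (xbar < mu + z * math.sqrt(sigma2 / n)).mean()
theoretical = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # Phi(1) ~ 0.8413
print(empirical, theoretical)
```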