Perplex
Sampling, combinations and CLT
Distributions & Random Variables


Understanding how sums of random variables behave, and why the mean of a large sample of any random variable is approximately normally distributed.


Linear Transformation of Random Variables
AHL AI 4.14

Suppose that the random variable X is scaled and shifted, producing the random variable aX + b. The expected value and variance of the resulting variable are

E(aX + b) = aE(X) + b
Var(aX + b) = a²Var(X)
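These two rules can be checked exactly on a small example. The sketch below uses a fair six-sided die as the random variable X and arbitrary values a = 3, b = 5 (neither appears in the lesson; they are illustrative choices), computing expectations with exact fractions.

```python
# Sketch: verify E(aX+b) = aE(X)+b and Var(aX+b) = a^2 Var(X)
# on a fair six-sided die (an illustrative choice, not from the lesson).
from fractions import Fraction

values = range(1, 7)
p = Fraction(1, 6)  # each face is equally likely

def E(f):
    """Expected value of f(X) for the die X."""
    return sum(p * f(x) for x in values)

a, b = 3, 5  # arbitrary scale and shift

EX = E(lambda x: x)                    # E(X) = 7/2
VarX = E(lambda x: x**2) - EX**2       # Var(X) = 35/12

EY = E(lambda x: a * x + b)            # E(aX + b), computed directly
VarY = E(lambda x: (a * x + b)**2) - EY**2

assert EY == a * EX + b                # E(aX+b) = aE(X) + b
assert VarY == a**2 * VarX             # Var(aX+b) = a^2 Var(X)
print(EY, VarY)                        # 31/2 105/4
```

Note that the shift b raises the expected value but leaves the variance untouched, while the scale a enters the variance squared.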
Expected value of linear combination of random variables
AHL AI 4.14

For random variables X and Y, the expected value of a linear combination of X and Y is the sum of the transformed variables' expectations. That is,

E(aX + bY + c) = aE(X) + bE(Y) + c


We can extend this result to the linear combination of any number of random variables X₁, …, Xₙ (independence is not required for expectations):

E(a₁X₁ + a₂X₂ + ⋯ + aₙXₙ + c) = a₁E(X₁) + a₂E(X₂) + ⋯ + aₙE(Xₙ) + c


This rule is sometimes referred to as the linearity of expectation.
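Linearity of expectation holds even when the variables are dependent, which the sketch below demonstrates on a small made-up joint distribution of (X, Y) (the probabilities and the constants a, b, c are illustrative, not from the lesson) where X and Y are deliberately not independent.

```python
# Sketch: linearity of expectation on a small joint pmf of (X, Y).
# The joint probabilities are made up for illustration and are
# deliberately NOT independent -- linearity holds regardless.
from fractions import Fraction as F

# joint pmf: {(x, y): P(X = x, Y = y)}
joint = {
    (0, 0): F(1, 2),
    (1, 1): F(1, 4),
    (1, 2): F(1, 4),
}

a, b, c = 2, -3, 7  # arbitrary coefficients

# left-hand side: E(aX + bY + c) computed directly over the joint pmf
E_lhs = sum(p * (a * x + b * y + c) for (x, y), p in joint.items())

# right-hand side: aE(X) + bE(Y) + c from the marginal expectations
EX = sum(p * x for (x, y), p in joint.items())
EY = sum(p * y for (x, y), p in joint.items())

assert E_lhs == a * EX + b * EY + c
print(E_lhs)  # 23/4
```

The same check would fail for variances here, since the variance rule in the next section genuinely needs independence.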

Variance of linear combination of random variables
AHL AI 4.14

If X and Y are independent random variables, the variance of a linear combination of X and Y is the sum of the transformed variances. That is,

Var(aX + bY + c) = a²Var(X) + b²Var(Y)


We can extend this result to the linear combination of any number of independent random variables X₁, …, Xₙ:

Var(a₁X₁ + a₂X₂ + ⋯ + aₙXₙ + c) = a₁²Var(X₁) + a₂²Var(X₂) + ⋯ + aₙ²Var(Xₙ)


Notice that the variance of the sum of two different observations from one population, Var(X₁ + X₂) = Var(X₁) + Var(X₂) = 2Var(X), is not equal to the variance of one observation doubled, Var(2X₁) = 4Var(X₁).
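The distinction between "two observations" and "one observation doubled" can be verified exactly with two independent die rolls (an illustrative choice of population, not from the lesson):

```python
# Sketch: X1, X2 are two independent fair die rolls.
# Var(X1 + X2) = 2 Var(X), while Var(2 X1) = 4 Var(X).
from fractions import Fraction
from itertools import product

faces = range(1, 7)
p_single = Fraction(1, 6)   # one roll: 6 equally likely outcomes
p_pair = Fraction(1, 36)    # two rolls: 36 equally likely outcomes

def var(outcomes, prob, f):
    """Variance of f over equally likely outcome tuples."""
    mean = sum(prob * f(*o) for o in outcomes)
    return sum(prob * f(*o) ** 2 for o in outcomes) - mean ** 2

var_X = var([(x,) for x in faces], p_single, lambda x: x)             # 35/12
var_sum = var(list(product(faces, faces)), p_pair, lambda x, y: x + y)
var_double = var([(x,) for x in faces], p_single, lambda x: 2 * x)

assert var_sum == 2 * var_X      # Var(X1 + X2) = 2 Var(X)
assert var_double == 4 * var_X   # Var(2 X1)    = 4 Var(X)
print(var_sum, var_double)       # 35/6 35/3
```

Intuitively, in X₁ + X₂ the two rolls can partially cancel each other out, whereas 2X₁ doubles every deviation of a single roll.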

Sample mean expected value and variance
AHL AI 4.15

Suppose X₁, X₂, …, Xₙ are n independent observations of the random variable X. We can find the mean of this sample by adding the observations together and dividing by n. This defines a new random variable, X̄, called the sample mean of the Xᵢ:

X̄ = (X₁ + X₂ + ⋯ + Xₙ)/n

Since X̄ is a linear combination of independent random variables, its expected value and variance are given by

E(X̄) = E(X)
Var(X̄) = Var(X)/n
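Both formulas can be checked exactly by enumerating every outcome for the mean of n = 3 independent die rolls (the choice of X and n is illustrative, not from the lesson):

```python
# Sketch: exact check of E(Xbar) = E(X) and Var(Xbar) = Var(X)/n
# for the mean of n = 3 independent fair die rolls.
from fractions import Fraction
from itertools import product

n = 3
faces = range(1, 7)
p = Fraction(1, 6 ** n)  # each of the 6^n roll sequences is equally likely

# sample mean for every possible sequence of n rolls
means = [Fraction(sum(roll), n) for roll in product(faces, repeat=n)]

E_bar = sum(p * m for m in means)
Var_bar = sum(p * m ** 2 for m in means) - E_bar ** 2

E_X = Fraction(7, 2)      # mean of a single die
Var_X = Fraction(35, 12)  # variance of a single die

assert E_bar == E_X           # E(Xbar) = E(X)
assert Var_bar == Var_X / n   # Var(Xbar) = Var(X)/n
print(E_bar, Var_bar)         # 7/2 35/36
```

Averaging leaves the centre of the distribution alone but divides the spread by n, which is why larger samples give more reliable estimates of the mean.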
Linear combination of normal random variables
AHL AI 4.15

Any linear combination of independent normally distributed random variables follows a normal distribution.


For independent random variables X and Y with X ∼ N(μX, σX²) and Y ∼ N(μY, σY²), the random variable W = aX + bY + c follows the distribution

W ∼ N(aμX + bμY + c, a²σX² + b²σY²)

We can extend this result to the linear combination of any number of independent, normally distributed random variables.
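Python's standard library happens to implement exactly this rule: statistics.NormalDist objects combine as independent normals under + and scale under *. The sketch below uses arbitrary illustrative parameters (note that NormalDist is parameterised by the standard deviation σ, while the lesson's N(μ, σ²) notation uses the variance).

```python
# Sketch: W = aX + bY + c for independent normals, using
# statistics.NormalDist, which combines independent normals exactly.
# All parameter values are arbitrary illustrations.
from math import isclose
from statistics import NormalDist

X = NormalDist(mu=2.0, sigma=3.0)    # X ~ N(2, 3^2)
Y = NormalDist(mu=-1.0, sigma=4.0)   # Y ~ N(-1, 4^2)
a, b, c = 2.0, 0.5, 10.0

W = a * X + b * Y + c                # arithmetic on independent normals

# mean:     a*muX + b*muY + c     = 2*2 + 0.5*(-1) + 10 = 13.5
# variance: a^2 sX^2 + b^2 sY^2   = 4*9 + 0.25*16       = 40.0
assert isclose(W.mean, a * X.mean + b * Y.mean + c)
assert isclose(W.variance, a**2 * X.variance + b**2 * Y.variance)
print(W.mean, W.variance)
```

Probabilities for W can then be read off directly, e.g. W.cdf(20.0) for P(W ≤ 20).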

Central Limit Theorem
AHL AI 4.15

The Central Limit Theorem states that if X₁, X₂, …, Xₙ are independent observations of the random variable X, with E(Xᵢ) = μ and Var(Xᵢ) = σ² for every i ∈ {1, 2, …, n}, then for sufficiently large n (typically n > 30), regardless of the distribution of X, the sum of the Xᵢ is approximately normally distributed:

X₁ + X₂ + ⋯ + Xₙ ∼ N(nμ, nσ²)

Further, the sample mean X̄ = (X₁ + X₂ + ⋯ + Xₙ)/n is approximately normally distributed with mean μ and variance σ²/n:

X̄ ∼ N(μ, σ²/n)
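A Monte Carlo sketch makes the theorem concrete. Below, X is uniform on [0, 1] (mean 1/2, variance 1/12), which is far from normal, yet the mean of n = 36 observations lands close to N(1/2, (1/12)/36). The sample size, number of trials, and seed are arbitrary illustrative choices.

```python
# Sketch: CLT illustration. X ~ Uniform(0, 1) is not normal, but the
# mean of n = 36 observations behaves like N(1/2, (1/12)/36).
import random

random.seed(0)            # fixed seed so the run is reproducible
n, trials = 36, 20_000

# simulate many sample means of n uniform observations
xbars = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]

mean = sum(xbars) / trials
var = sum((x - mean) ** 2 for x in xbars) / trials

# CLT predicts mean ~ 1/2 and variance ~ (1/12)/36 = 1/432 ~ 0.00231
assert abs(mean - 0.5) < 0.01
assert abs(var - 1 / 432) < 2e-4
print(round(mean, 3), round(var, 5))
```

A histogram of xbars would show the familiar bell shape even though each individual observation is flat on [0, 1].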
