    IB Math AIHL / Distributions & Random Variables / Skills

    Skill Checklist

    Track your progress across all skills in your objective. Mark your confidence level and identify areas to focus on.

    Track your progress: Don't know · Working on it · Confident

    📖 = included in formula booklet • 🚫 = not in formula booklet



    28 Skills Available


    Discrete random variables

    8 skills
    Concept of a random variable
    SL 4.7

    A random variable is a variable that can take different values, each associated with some probability, arising from a random process or phenomenon. We usually denote random variables with a capital letter, such as X.


    It can be discrete (taking values from a finite or countably infinite set) or continuous (taking any value within an interval).


    The probability distribution of a random variable tells us how likely each outcome is.

    Concept of a discrete random variable
    SL 4.7

    A discrete random variable takes values from a finite (or countably infinite) set:

    X ∈ {x₁, x₂, …, xₙ}

    where each possible value has an associated probability.

    Discrete probabilities sum to 1
    SL 4.7

    The sum of the probabilities for all possible values {x₁, x₂, …, xₙ} of a discrete random variable X equals 1. In symbols,

    P(U) = P(X=x₁) + P(X=x₂) + ⋯ + P(X=xₙ) = ∑ₓ P(X=x) = 1

    where U denotes the sample space.

    Discrete probability distributions in a table
    SL 4.7

    Probability distributions of discrete random variables can be given in a table or as an expression. As a table, distributions have the form

    x      | x₁      | x₂      | … | xₙ
    P(X=x) | P(X=x₁) | P(X=x₂) | … | P(X=xₙ)

    where the values in the row P(X=x) sum to 1.

    Discrete probability distributions as an expression
    SL 4.7

    Probability distributions can be given in a table or as an expression. As an expression, distributions have the form

    P(X=x) = (expression in x),  x ∈ {set of possible x}

    for any discrete random variable X.

    Expected Value
    SL 4.7

    The expected value of a discrete random variable X is the average value you would get if you carried out infinitely many repetitions. It is a weighted sum of all the possible values:

    E(X) = ∑ x·P(X=x) 📖

    The expected value is often denoted μ.

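    The probability-weighted sum above can be sketched directly in Python; the distribution here is a made-up example (a fair four-sided die):

```python
# Expected value of a discrete random variable given by a table.
# Made-up distribution: a fair four-sided die.
distribution = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}

# Discrete probabilities must sum to 1.
assert abs(sum(distribution.values()) - 1) < 1e-12

# E(X) = sum over x of x * P(X = x), a probability-weighted sum.
expected_value = sum(x * p for x, p in distribution.items())
print(expected_value)  # 2.5
```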
    Fair Games
    SL 4.7

    In probability, a game is a scenario in which a player can win rewards based on the outcome of a probabilistic event. The rewards follow a probability distribution, modelled by a random variable X, that governs the likelihood of winning each reward. The expected return is the reward a player can expect to earn on average; it is given by E(X).


    Games can also have a cost, which is the price a player must pay each time before playing the game. If the cost is equal to the expected return, the game is said to be fair.

    Linear transformations of a random variable
    AHL AI 4.14

    If you add a constant b to every possible value a random variable can take on, its expected value will increase by that constant, reflecting a "shift" in the random variable's distribution. Its variance will remain unchanged, since adding a constant has no impact on the "spread." Multiplying by a constant a scales both expectation and variance:

    E(aX+b) = aE(X) + b
    Var(aX+b) = a²Var(X)
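    A minimal Python check of these two identities, using a small made-up discrete distribution and transforming it exactly:

```python
# Check E(aX+b) = aE(X) + b and Var(aX+b) = a^2 Var(X)
# for a small made-up discrete distribution.
pmf = {0: 0.2, 1: 0.5, 2: 0.3}
a, b = 3, 4

def mean(p):
    return sum(x * q for x, q in p.items())

def var(p):
    m = mean(p)
    return sum((x - m) ** 2 * q for x, q in p.items())

# Distribution of Y = aX + b: values are transformed, probabilities unchanged.
pmf_y = {a * x + b: q for x, q in pmf.items()}

assert abs(mean(pmf_y) - (a * mean(pmf) + b)) < 1e-12
assert abs(var(pmf_y) - a ** 2 * var(pmf)) < 1e-12
```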

    Binomial Distribution

    4 skills
    The Concept of a Binomial Distribution
    SL 4.8

    The binomial distribution models situations where the same action is repeated multiple times, each with the same chance of success. It has two key numbers: the number of attempts (n) and the probability of success in each attempt (p).


    If a random variable X follows a binomial distribution, we write X∼B(n,p).

    Binomial PDF with calculator
    SL 4.8

    The binomial probability distribution function (often called the pdf; for a discrete variable it is strictly a probability mass function) models the likelihood of obtaining k successes from n trials, where each trial succeeds with probability p. We calculate the probability of exactly k successes in n trials, P(X=k), using the calculator's binompdf function.


    Press 2nd → distr → binompdf(. Once in the binompdf function, write your n value after "trials," your p value after "p," and your k value after "x value." Then hit enter twice and the calculator will return the probability you are interested in.


    The distr button is located above vars. Once in the distr menu, you can also press alpha → A to navigate to the binompdf function.

    Binomial CDF with Calculator
    SL 4.8

    The binomial cumulative distribution function (cdf) tells us the probability of obtaining k or fewer successes in n trials, each with probability of success p. We calculate the probability of at most k successes, P(X≤k), using the calculator's binomcdf function.


    Press 2nd → distr → binomcdf(. Once in the binomcdf function, write your n value after "trials," your p value after "p," and your k value after "x value." Then hit enter twice and the calculator will return the probability you are interested in.


    The distr button is located above vars. Once in the distr menu, you can also press alpha → B to navigate to the binomcdf function.

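    For readers without a GDC at hand, the same two quantities can be sketched in Python using the binomial formula (the n, p, k values below are a made-up example):

```python
from math import comb

# What the GDC's binompdf/binomcdf compute, written out for X ~ B(n, p).
def binom_pdf(n, p, k):
    # P(X = k) = C(n, k) * p^k * (1 - p)^(n - k)
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def binom_cdf(n, p, k):
    # P(X <= k): sum the pdf from 0 up to k inclusive.
    return sum(binom_pdf(n, p, i) for i in range(k + 1))

# Made-up example: n = 10 trials, p = 0.5 chance of success per trial.
p_exactly_3 = binom_pdf(10, 0.5, 3)   # P(X = 3)  = 120/1024
p_at_most_3 = binom_cdf(10, 0.5, 3)   # P(X <= 3) = 176/1024
```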
    Expectation and Variance of Binomial Distribution
    SL 4.8

    If X∼B(n,p), then

    E(X) = np 📖

    and

    Var(X) = np(1−p) 📖

    Normal Distribution

    5 skills
    The Normal Distribution
    SL 4.9

    The normal distribution, often called the bell curve, is a symmetric, bell-shaped probability distribution widely used to model natural variability and measurement errors. It appears frequently in natural settings because averaging many small, independent effects tends to produce results that cluster around a central value, naturally forming a bell-shaped distribution.


    The normal distribution is characterized by its mean, μ, and standard deviation, σ, which completely and uniquely describe both the central value and how "spread out" the curve is. By convention, we describe the normal distribution by writing X ∼ N(μ, σ²). Notice that σ² is the variance, not the standard deviation.


    The probability that X is less than a given value a, written P(X < a), is equal to the area under the curve to the left of x = a.

    It follows that the total area under the curve is 1, which is required as the probabilities must sum to 1.

    The Bell Curve properties
    SL 4.9

    Because of the symmetry of the normal distribution, we know that

    P(X > μ) = P(X < μ) = 1/2 = 0.5 🚫

    Further, for any real number a,

    P(X > μ+a) = P(X < μ−a)

    The Empirical Rule
    SL 4.9

    It is also useful (but not often required) to know the empirical rule:

    P(μ−σ < X < μ+σ) ≈ 68%
    P(μ−2σ < X < μ+2σ) ≈ 95%
    P(μ−3σ < X < μ+3σ) ≈ 99.7%

    Normal calculations
    SL 4.9

    To calculate P(a < X < b) for X ∼ N(μ, σ²) on your GDC, press 2nd → distr (on top of vars) to open the probability distribution menu. Select normalcdf( with your cursor. Type the value of a after "lower," the value of b after "upper," the value of μ after "μ," and the value of σ = √(σ²) after "σ" (since the calculator asks for the standard deviation, not the variance). Then press enter twice and the calculator will return the value of P(a < X < b).


    If you want to find a one-sided probability like P(a < X), enter ±1×10⁹⁹ as the upper or lower bound.


    Under the hood, the calculator is finding the area under the normal curve between x = a and x = b.

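    The same calculation can be sketched in Python via the standard normal CDF Φ, built from the error function (the μ, σ, a, b values below are a made-up example):

```python
from math import erf, sqrt

def phi(z):
    # Standard normal CDF, via the error function.
    return 0.5 * (1 + erf(z / sqrt(2)))

def normal_cdf(a, b, mu, sigma):
    # Area under the N(mu, sigma^2) curve between a and b,
    # i.e. what the GDC's normalcdf returns.
    return phi((b - mu) / sigma) - phi((a - mu) / sigma)

# Made-up example: X ~ N(100, 15^2); P(85 < X < 115) spans mu ± sigma,
# so it should match the 68% empirical-rule value.
p = normal_cdf(85, 115, 100, 15)
```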
    Inverse Normal Calculations
    SL 4.9

    The calculator can also perform inverse normal calculations. That is, given the mean μ, the standard deviation σ, and the probability P(X<a)=k, the calculator can find the value a.


    On your GDC, press 2nd → distr (on top of vars) to open the probability distribution menu. Select invNorm( with your cursor. Type the value of k after "area," the value of μ after "μ," and the value of σ = √(σ²) after "σ" (since the calculator asks for the standard deviation, not the variance). Then press enter twice and the calculator will return the value of a.


    Note the calculator specifically returns the value of the "left end" of the tail. To find the value of some b when given P(b<X)=k, enter the value of 1−k (the complement) as the area.

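    A sketch of what an inverse normal calculation does: solve P(X < a) = k numerically, here by bisection on the CDF (the values below are a made-up example):

```python
from math import erf, sqrt

def phi(z):
    # Standard normal CDF.
    return 0.5 * (1 + erf(z / sqrt(2)))

def inv_norm(area, mu, sigma):
    # Solve P(X < a) = area by bisection on the CDF; phi is increasing,
    # so the search interval can be halved until a is pinned down.
    lo, hi = mu - 10 * sigma, mu + 10 * sigma
    for _ in range(200):
        mid = (lo + hi) / 2
        if phi((mid - mu) / sigma) < area:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Made-up example: X ~ N(0, 1) and P(X < a) = 0.975 gives a close to 1.96.
a = inv_norm(0.975, 0, 1)
```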

    Continuous random variables

    1 skill
    Linear Transformation of Random Variables
    AHL AI 4.14

    Suppose that the random variable X is scaled and shifted, producing the random variable aX+b. The expected value and variance of the resulting variable are

    E(aX+b) = aE(X) + b 📖
    Var(aX+b) = a²Var(X) 📖

    Poisson Distribution

    4 skills
    Definition of a Poisson random variable
    AHL AI 4.17

    The Poisson distribution is a discrete probability distribution used to calculate the number of occurrences of an event in a given interval of time or space. In order to use a Poisson distribution, the following conditions must be satisfied:

    1. Events are independent

    2. Events occur at some average rate which is uniform during the period of interest


    If a random variable X follows a Poisson distribution, we write X∼Po(λ), where λ is the average number of occurrences of an event in a given interval. X takes on non-negative integer values, x∈{0,1,2,…}.


    To model count data in a different interval than the one given, we can construct a different random variable Y, whose parameter depends on how much larger or smaller its interval is than X's. In particular, if the interval of X is considered one "unit" and Y models count data for m units, then Y∼Po(m⋅λ).
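    The Poisson pmf and the interval-scaling rule above can be sketched in Python (the rate below is a made-up example):

```python
from math import exp, factorial

# Poisson pmf: P(X = x) = e^(-lam) * lam^x / x!
def poisson_pdf(lam, x):
    return exp(-lam) * lam ** x / factorial(x)

# Made-up example: on average lam = 2 calls arrive per hour.
lam = 2.0
p_three_in_one_hour = poisson_pdf(lam, 3)         # P(X = 3) for X ~ Po(2)

# Over a 3-hour interval the count is modelled by Y ~ Po(3 * lam).
p_three_in_three_hours = poisson_pdf(3 * lam, 3)  # P(Y = 3) for Y ~ Po(6)
```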

    Calculating Poisson probabilities using technology
    AHL AI 4.17

    To calculate P(X=x),x∈{0,1,2,…} for X∼Po(λ) using a calculator, press 2nd →distr (on top of vars) to open the probability distribution menu. Select poissonpdf( with your cursor (or press alpha →C ). Type the value of λ after "λ" and the value of x after "x value." Then click paste and enter and the calculator will return the value of P(X=x).


    To calculate P(X≤x),x∈{0,1,2,…} for X∼Po(λ) using a calculator, press 2nd →distr (on top of vars) to open the probability distribution menu. Select poissoncdf( with your cursor (or press alpha →D ). Type the value of λ after "λ" and the value of x after "x value." Then click paste and enter and the calculator will return the value of P(X≤x). Notice that using the cdf function on a calculator will always assume an inclusive less than or equal to "≤", so if you want to find a different inequality, you must adjust your calculation accordingly.

    Mean and Variance of a Poisson are both λ
    AHL AI 4.17

    For a random variable X ∼ Po(λ), the parameter λ is both the expectation and the variance of X:

    E(X)=Var(X)=λ
    Sum of Poisson distributions is Poisson
    AHL AI 4.17

    The sum of independent, Poisson distributed random variables follows a Poisson distribution.


    If X and Y are independent random variables with X ∼ Po(λ_X) and Y ∼ Po(λ_Y), the random variable Z = X + Y follows the distribution

    Z ∼ Po(λ_X + λ_Y)
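    This property can be checked by direct convolution, since P(Z = k) sums P(X = j)·P(Y = k−j) over all ways to split k between X and Y; a short Python check with made-up rates:

```python
from math import exp, factorial

def poisson_pdf(lam, x):
    # P(X = x) for X ~ Po(lam)
    return exp(-lam) * lam ** x / factorial(x)

# Made-up rates: X ~ Po(2) and Y ~ Po(3), independent; Z = X + Y should be Po(5).
lam_x, lam_y, k = 2.0, 3.0, 4

# P(Z = k) by convolution over all ways to split k between X and Y.
p_convolution = sum(poisson_pdf(lam_x, j) * poisson_pdf(lam_y, k - j)
                    for j in range(k + 1))
p_direct = poisson_pdf(lam_x + lam_y, k)

assert abs(p_convolution - p_direct) < 1e-12
```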

    Sampling, combinations and CLT

    6 skills
    Linear Transformation of Random Variables
    AHL AI 4.14

    Suppose that the random variable X is scaled and shifted, producing the random variable aX+b. The expected value and variance of the resulting variable are

    E(aX+b) = aE(X) + b 📖
    Var(aX+b) = a²Var(X) 📖
    Expected value of linear combination of random variables
    AHL AI 4.14

    For random variables X and Y, the expected value of a linear combination of X and Y is equivalent to the sum of the transformed variables' expectations. That is,

    E(aX+bY+c)=aE(X)+bE(Y)+c


    We can extend this result to the linear combination of any number of random variables X₁, …, Xₙ (independence is not required for expectation):

    E(a₁X₁ + a₂X₂ + ⋯ + aₙXₙ + c) = a₁E(X₁) + a₂E(X₂) + ⋯ + aₙE(Xₙ) + c


    This rule is sometimes referred to as the linearity of expectation.

    Variance of linear combination of random variables
    AHL AI 4.14

    If X and Y are independent random variables, the variance of a linear combination of X's and Y's is equivalent to the sum of the transformed variances. That is,

    Var(aX+bY+c)=a2Var(X)+b2Var(Y)


    We can extend this result to the linear combination of any number of independent random variables X₁, …, Xₙ:

    Var(a₁X₁ + a₂X₂ + ⋯ + aₙXₙ + c) = a₁²Var(X₁) + a₂²Var(X₂) + ⋯ + aₙ²Var(Xₙ)


    Notice that the variance of the sum of two different observations from one population, Var(X₁ + X₂) = Var(X₁) + Var(X₂) = 2Var(X), is not the same as the variance of one observation doubled, Var(2X₁) = 4Var(X₁).
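    This distinction can be checked exactly with a fair die by enumerating every outcome:

```python
# Exact check with a fair die: Var(X1 + X2) = 2 Var(X), while Var(2X) = 4 Var(X).
faces = [1, 2, 3, 4, 5, 6]

def mean(values, probs):
    return sum(v * p for v, p in zip(values, probs))

def var(values, probs):
    m = mean(values, probs)
    return sum((v - m) ** 2 * p for v, p in zip(values, probs))

p = [1 / 6] * 6
var_x = var(faces, p)  # 35/12 for one fair die

# Sum of two independent rolls: all 36 outcomes are equally likely.
sums = [a + b for a in faces for b in faces]
var_sum = var(sums, [1 / 36] * 36)

# Doubling a single roll.
var_double = var([2 * f for f in faces], p)

assert abs(var_sum - 2 * var_x) < 1e-9
assert abs(var_double - 4 * var_x) < 1e-9
```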

    Sample mean expected value and variance
    AHL AI 4.15

    Suppose X₁, X₂, …, Xₙ are n independent observations of the random variable X. We can find the mean of this sample by adding together the observations and dividing by n. This defines a new random variable, X̄, called the sample mean of the Xᵢ:

    X̄ = (X₁ + X₂ + ⋯ + Xₙ)/n

    Since X̄ is a linear combination of independent random variables, its expected value and variance are given by

    E(X̄) = E(X)
    Var(X̄) = Var(X)/n
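    Both identities can be verified exactly for a sample of n = 2 die rolls by enumerating all 36 equally likely outcomes:

```python
# Exact check that E(Xbar) = E(X) and Var(Xbar) = Var(X)/n for n = 2 die rolls.
faces = [1, 2, 3, 4, 5, 6]

def mean(values, probs):
    return sum(v * p for v, p in zip(values, probs))

def var(values, probs):
    m = mean(values, probs)
    return sum((v - m) ** 2 * p for v, p in zip(values, probs))

p = [1 / 6] * 6

# Sample mean of two independent rolls: all 36 outcomes are equally likely.
means = [(a + b) / 2 for a in faces for b in faces]
q = [1 / 36] * 36

assert abs(mean(means, q) - mean(faces, p)) < 1e-9       # E(Xbar) = E(X)
assert abs(var(means, q) - var(faces, p) / 2) < 1e-9     # Var(Xbar) = Var(X)/2
```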
    Linear combination of normal random variables
    AHL AI 4.15

    Any linear combination of normally distributed random variables follows a normal distribution.


    For independent random variables X and Y with X ∼ N(μ_X, σ_X²) and Y ∼ N(μ_Y, σ_Y²), the random variable W = aX + bY + c follows the distribution

    W ∼ N(aμ_X + bμ_Y + c, a²σ_X² + b²σ_Y²)

    We can extend this result to the linear combination of any number of independent, normally distributed random variables.

    Central Limit Theorem
    AHL AI 4.15

    The Central Limit Theorem states that if X₁, X₂, …, Xₙ are independent observations of the random variable X, with E(Xᵢ) = μ and Var(Xᵢ) = σ² for any i ∈ {1, 2, …, n}, then for sufficiently large n (typically n > 30), regardless of the distribution of X, the sum of the Xᵢ is approximately normally distributed:

    X₁ + X₂ + ⋯ + Xₙ ∼ N(nμ, nσ²)

    Further, the sample mean X̄ = (X₁ + X₂ + ⋯ + Xₙ)/n is approximately normally distributed with mean μ and variance σ²/n:

    X̄ ∼ N(μ, σ²/n)
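    A quick simulation sketch of the theorem with die rolls (the sample size and trial count below are chosen arbitrarily):

```python
import random

# Simulation sketch of the CLT: means of n die rolls should cluster
# approximately normally around mu = 3.5 with variance sigma^2 / n = (35/12)/n.
random.seed(0)  # fixed seed so the run is reproducible
n, trials = 40, 5000
sample_means = [sum(random.randint(1, 6) for _ in range(n)) / n
                for _ in range(trials)]

grand_mean = sum(sample_means) / trials
grand_var = sum((m - grand_mean) ** 2 for m in sample_means) / trials
# grand_mean comes out close to 3.5 and grand_var close to (35/12)/40 ≈ 0.073
```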