Mathematical Methods in PDEs and Statistical Analysis
Partial Differential Equations (PDE)
A Partial Differential Equation (PDE) involves a function u(x, y, …) and its partial derivatives.
- Homogeneous: If every term in the equation contains the dependent variable u or its derivatives. The general solution is simply the Complementary Function (C.F.).
- Non-Homogeneous: If there is a term that is a function of the independent variables only (f(x, y)). The solution is u = C.F. + P.I. (Particular Integral). Example: ∇²u = f(x, y) (Poisson’s Equation).
The Multiplier Method (Lagrange’s Linear PDE)
To solve Lagrange’s equation Pp + Qq = R (where p = ∂z/∂x and q = ∂z/∂y), we use the auxiliary equations: dx/P = dy/Q = dz/R.
The Method: We find multipliers (l, m, n) such that lP + mQ + nR = 0, which implies l dx + m dy + n dz = 0. By integrating this, we get a constant u(x, y, z) = C1. We repeat the process to find v(x, y, z) = C2. The general solution is Φ(u, v) = 0.
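As a brief illustration of the method (a standard textbook example, not taken from these notes), consider yz p + zx q = xy:

```latex
% Worked example: solve  yz\,p + zx\,q = xy  by Lagrange's method.
\[
  \frac{dx}{yz} = \frac{dy}{zx} = \frac{dz}{xy}
\]
% From the first pair:  x\,dx = y\,dy  \Rightarrow  x^2 - y^2 = C_1.
% From the first and third:  x\,dx = z\,dz  \Rightarrow  x^2 - z^2 = C_2.
% General solution:  \Phi(x^2 - y^2,\; x^2 - z^2) = 0.
```

Here the multipliers are simply read off pairwise; in harder problems one chooses (l, m, n) deliberately so that lP + mQ + nR = 0.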
Probability Distributions
Binomial Distribution
Used for n independent trials with a constant probability of success p.
Formula: P(X = k) = C(n, k) p^k q^(n−k), for k = 0, 1, …, n, where q = 1 − p.
Mean: np
Variance: npq
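The PMF, mean, and variance above can be checked directly by summation; a minimal sketch (the values n = 10, p = 0.3 are illustrative):

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for n independent trials with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Mean and variance computed from the PMF should match np and npq.
n, p = 10, 0.3
mean = sum(k * binomial_pmf(k, n, p) for k in range(n + 1))
var = sum((k - mean)**2 * binomial_pmf(k, n, p) for k in range(n + 1))
```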
Poisson Distribution
A limiting case of the Binomial distribution where n approaches infinity and p approaches 0, such that np = λ (a constant).
Formula: P(X = k) = e^(−λ) λ^k / k!, for k = 0, 1, 2, …
Mean = Variance = λ
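The limiting relationship can be seen numerically: with np held fixed at λ, a binomial probability approaches the Poisson one as n grows (λ = 2 and k = 3 here are illustrative choices):

```python
from math import comb, exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return exp(-lam) * lam**k / factorial(k)

# Binomial(n, p) with np = λ approaches Poisson(λ) for large n.
lam, n = 2.0, 10_000
p = lam / n
binom = comb(n, 3) * p**3 * (1 - p)**(n - 3)
poisson = poisson_pmf(3, lam)
```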
Normal Distribution (Gaussian)
The most important continuous distribution for natural phenomena.
Standard Normal Variable (Z): Z = (X – μ) / σ, which standardizes X to have mean 0 and variance 1.
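Standardizing lets one probability table (or function) serve every normal distribution. A small sketch using the error function from the standard library (μ = 60, σ = 5, and the query point 70 are illustrative):

```python
from math import erf, sqrt

def std_normal_cdf(z):
    """Φ(z) for the standard normal, via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# P(X < 70) for X ~ N(μ = 60, σ = 5): standardize, then look up Φ.
mu, sigma, x = 60, 5, 70
z = (x - mu) / sigma          # z = 2.0
prob = std_normal_cdf(z)
```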
Moments and Generating Functions
Moments about Mean vs. Origin
- About Origin (μ′r): The r-th moment is E[X^r]. The first moment μ′1 is the Mean.
- About Mean (μr): Also called central moments. μr = E[(X – μ)^r].
- μ1 = 0
- μ2 = σ² (Variance)
- Relationship: μ2 = μ′2 – (μ′1)²
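The relationship μ2 = μ′2 − (μ′1)² can be verified by direct summation over any discrete distribution (the values and probabilities below are illustrative):

```python
# Verify μ2 = μ′2 − (μ′1)² for a small discrete distribution.
xs = [0, 1, 2, 3]
ps = [0.1, 0.2, 0.3, 0.4]

mu1 = sum(x * p for x, p in zip(xs, ps))                      # μ′1 (mean)
mu2_raw = sum(x**2 * p for x, p in zip(xs, ps))               # μ′2 = E[X²]
mu2_central = sum((x - mu1)**2 * p for x, p in zip(xs, ps))   # μ2 (variance)
```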
Moment Generating Function (MGF)
The MGF “encodes” all moments of a distribution into a single function.
Derivation of Moments: To find the r-th moment about the origin, differentiate the MGF r times and evaluate at t = 0.
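For instance, the MGF of a Poisson(λ) variable is M(t) = exp(λ(e^t − 1)); differentiating once at t = 0 should recover the first moment, λ. A sketch using a central-difference approximation to M′(0) (λ = 2 is an illustrative value):

```python
from math import exp

lam = 2.0
M = lambda t: exp(lam * (exp(t) - 1))   # MGF of Poisson(λ)

# Central difference approximates M'(0), the first moment about the origin.
h = 1e-5
first_moment = (M(h) - M(-h)) / (2 * h)
```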
Skewness
Measures the lack of symmetry in a distribution.
- Positive Skew: Tail extends to the right (Mean > Median > Mode).
- Negative Skew: Tail extends to the left (Mode > Median > Mean).
- Formula (Karl Pearson): Sk = (Mean – Mode) / σ
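Pearson's coefficient is easy to compute with the standard library; on a right-skewed sample the result should be positive (the data below is an illustrative made-up sample):

```python
from statistics import mean, mode, pstdev

# Karl Pearson's skewness: Sk = (Mean − Mode) / σ.
data = [2, 3, 3, 3, 4, 5, 6, 8]   # tail extends to the right
sk = (mean(data) - mode(data)) / pstdev(data)
```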
Covariance, Correlation, and Regression
Covariance: Cov(X, Y) = E[XY] – E[X]E[Y]. It shows the direction of the linear relationship.
Karl Pearson Correlation (r): Normalizes covariance to a range of -1 to +1.
Rank Correlation (Spearman’s): Used when data is ranked (qualitative).
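Both correlation measures can be sketched from first principles: Spearman's rho is just Pearson's r applied to the ranks (this simple ranking assumes no ties; the sample data is illustrative):

```python
from statistics import mean

def pearson_r(x, y):
    """Karl Pearson correlation: covariance normalized by both std devs."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx)**2 for a in x)
    vy = sum((b - my)**2 for b in y)
    return cov / (vx * vy) ** 0.5

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson r on ranks (assumes no ties)."""
    rank = lambda v: [sorted(v).index(a) + 1 for a in v]
    return pearson_r(rank(x), rank(y))

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 3, 6]
```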
Regression Analysis
Predicts the dependent variable based on the independent variable.
Regression Line (Y on X): y – ȳ = Byx(x – x̄)
Regression Coefficient: Byx = Cov(X, Y) / Var(X) = r·(σy/σx), the slope of the Y-on-X line.
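A minimal sketch of fitting and using the Y-on-X line (the data points are illustrative):

```python
from statistics import mean

# Y-on-X regression: Byx = Cov(X, Y) / Var(X); prediction uses
# y − ȳ = Byx (x − x̄).
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 6]
mx, my = mean(x), mean(y)
byx = (sum((a - mx) * (b - my) for a, b in zip(x, y))
       / sum((a - mx)**2 for a in x))

def predict(x_new):
    """Predict y at x_new from the fitted Y-on-X line."""
    return my + byx * (x_new - mx)
```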
Curve Fitting (Method of Least Squares)
To fit a curve y = f(x) to a set of points, we minimize the sum of the squares of the residuals.
For a straight line (y = a + bx):
1. Σy = na + bΣx
2. Σxy = aΣx + bΣx²
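The two normal equations form a 2×2 linear system in a and b; a sketch solving it by Cramer's rule (the data lies exactly on y = 1 + 2x, so the fit should recover a = 1, b = 2):

```python
# Solve the normal equations for a straight line y = a + bx:
#   Σy  = n·a + b·Σx
#   Σxy = a·Σx + b·Σx²
xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]          # exactly y = 1 + 2x

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

# Cramer's rule on the 2x2 system.
det = n * sxx - sx * sx
a = (sy * sxx - sx * sxy) / det
b = (n * sxy - sx * sy) / det
```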
Inferential Statistics: Hypothesis Testing
Test of Hypothesis: A procedure to decide whether to reject a statistical claim (strictly, we either reject H0 or fail to reject it).
- Null Hypothesis (H0): Statement of no change or no effect.
- Alternative Hypothesis (H1): What we suspect is true.
- Level of Significance (α): Usually 5% (0.05).
- Critical Region: If the calculated test statistic falls here, we reject H0.
Chi-Square (χ²) Test
Used to check if observed frequencies (O) match expected frequencies (E).
Formula: χ² = Σ [ (Oi – Ei)² / Ei ]
Degrees of Freedom (df): (n – 1) for a 1D table, or (r – 1)(c – 1) for a contingency table.
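A standard goodness-of-fit sketch: testing whether a die is fair from 60 rolls (the observed counts are illustrative). With df = 6 − 1 = 5, the computed χ² is compared against the table value at α = 0.05 (11.07):

```python
# χ² goodness-of-fit: observed counts vs. expected 10 per face.
observed = [8, 9, 12, 11, 10, 10]
expected = [10] * 6

chi2 = sum((o - e)**2 / e for o, e in zip(observed, expected))
# If chi2 exceeds the critical value (11.07 at α = 0.05, df = 5),
# reject H0 that the die is fair; here it does not.
```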
Recursion (Statistical Context)
In probability, recursion is often used to find moments for complex distributions. For the Poisson Distribution, the central moments satisfy the recurrence μ(r+1) = λ(r·μ(r−1) + dμr/dλ), which yields μ2 = μ3 = λ and μ4 = 3λ² + λ.
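The known Poisson central moments (μ2 = μ3 = λ, μ4 = 3λ² + λ) can be checked numerically by direct summation over the PMF; a sketch with an illustrative λ = 2 (the sum is truncated at k = 100, where the tail is negligible):

```python
from math import exp, factorial

def poisson_central_moment(r, lam, kmax=100):
    """μr = E[(X − λ)^r] for X ~ Poisson(λ), by direct (truncated) summation."""
    pmf = lambda k: exp(-lam) * lam**k / factorial(k)
    return sum((k - lam)**r * pmf(k) for k in range(kmax))

lam = 2.0
mu2 = poisson_central_moment(2, lam)   # expect λ
mu3 = poisson_central_moment(3, lam)   # expect λ
mu4 = poisson_central_moment(4, lam)   # expect 3λ² + λ
```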