Mathematical nostalgia
Logic and Proofs
First-Order Logic
Implication
As a syllogistic form, known as modus ponens.
\[ \begin{aligned} &P \implies Q \\ & P \\ \therefore\ &Q \end{aligned} \]
Negation
\[ \neg(P \implies Q) \equiv P \land \neg Q \] \[ P \implies Q \equiv \neg P \lor Q \]
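Both equivalences can be checked mechanically over all four truth assignments. A minimal sketch in Python (the helper name `implies` is illustrative, not from the source):

```python
from itertools import product

def implies(p, q):
    # Material implication: P => Q is false only when P is true and Q is false.
    return not (p and not q)

# Verify both equivalences over every assignment of truth values.
for p, q in product([False, True], repeat=2):
    # not (P => Q)  is equivalent to  P and not Q
    assert (not implies(p, q)) == (p and not q)
    # P => Q  is equivalent to  not P or Q
    assert implies(p, q) == ((not p) or q)
```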
Converse
The converse reverses the sufficient and necessary conditions. Inferring it as a syllogistic form is known as affirming the consequent, a fallacy.
\[ P \implies Q \] \[ Q \implies P \]
Contrapositive
Negating the necessary condition implies the negation of the sufficient condition.
\[ \neg Q \implies \neg P \]
As a syllogistic form, known as modus tollens.
\[ \begin{aligned} &P \implies Q \\ \neg &Q \\ \therefore\ \neg &P \end{aligned} \]
De Morgan’s Laws
\[ \neg (A \land B) \iff \neg A \lor \neg B \] \[ \neg (A \lor B) \iff \neg A \land \neg B \]
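As with the implication equivalences, a truth-table check over all four assignments confirms both laws; a quick Python sketch:

```python
from itertools import product

# Verify De Morgan's laws over all four truth assignments.
for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))
    assert (not (a or b)) == ((not a) and (not b))
```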
Quantifiers
Existential: \(P\) is true for at least one \(x\)
\[ \exists x Px \]
Universal: \(P\) is true for all \(x\)
\[ \forall x Px \]
Combinations:
For all \(x\), there exists at least one \(y\) for which \(P\) is true.
\[ \forall x \exists y Pxy \]
There exists an \(x\) for which \(P\) holds true for all \(y\).
\[ \exists x \forall y Pxy \]
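Quantifier order matters: the two combinations above are not equivalent. Over a finite domain this can be checked directly with `all` and `any`; a sketch, using the illustrative predicate \(P(x,y) := x + y = 0\) on the domain \(\{-1, 0, 1\}\):

```python
# P(x, y) := x + y == 0 on the domain D = {-1, 0, 1}
D = [-1, 0, 1]
P = lambda x, y: x + y == 0

# For all x there exists y: True, since y = -x is always in D.
forall_exists = all(any(P(x, y) for y in D) for x in D)
# There exists x for all y: False, since no single x pairs with every y.
exists_forall = any(all(P(x, y) for y in D) for x in D)

assert forall_exists is True
assert exists_forall is False
```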
Proof forms: direct
Proof by exhaustion
Break up the problem into cases that exhaustively cover the possibilities, and prove each case. Common in game theory, and any problem where different regions of the parameter space behave differently and admit different solutions.
Proof by construction
If you can construct an example of something, then it exists. Typical for existence proofs.
Proof by induction
Given (or having proved) a claim for a base case, if the claim can be proved for a generic inductive step (i.e., if its truth for iteration \(n\) implies its truth for \(n+1\)), then the claim is true for the sequence as a whole.
Example: Sum of a sequence
Prove that \[ \sum_{i=1}^n i = \frac{n(n+1)}{2} \]
For the base case,
\[ \sum_{i=1}^1 i = \frac{1(1+1)}{2} = 1 \]
For the inductive step, assume the claim holds for \(n = k\),
\begin{equation} \sum_{i=1}^k i = \frac{k(k+1)}{2} \label{eq:seqsum} \end{equation}
and show it remains true for \(k + 1\):
\[ \sum_{i=1}^{k+1} i = \frac{(k+1)(k+2)}{2} \]
Partitioning the sum and applying \eqref{eq:seqsum}, we have
\begin{align*} \sum_{i=1}^k i + (k+1) &= \frac{k(k+1)}{2} + (k + 1)\\ &= \frac{k(k+1) + 2k + 2}{2} \\ &= \frac{k^2 + k + 2k + 2}{2} \\ &= \frac{(k+1)(k+2)}{2} \end{align*}
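The closed form can also be spot-checked numerically against the direct sum; a small Python sketch (the helper name `triangular` is illustrative):

```python
# Check the closed form n(n+1)/2 against the direct sum for the first 100 n.
def triangular(n):
    return n * (n + 1) // 2

for n in range(1, 101):
    assert sum(range(1, n + 1)) == triangular(n)

print(triangular(100))  # 5050
```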
Proof forms: indirect
Proof by counterexample
Straightforward. For the categorical proposition
\[A \land B \implies D\]
a single counterexample in which
\[A \land B \land \neg D\]
holds suffices to disprove the proposition.
Proof by contradiction
A special case of reductio ad absurdum premised on the negation of the proposition intended to be proved.
That is, if you want to prove \(P\), assume \(\neg P\) and deduce from that premise that \(Q \land \neg Q\).
Transcendental Functions
Logarithms, Exponents, Roots
Logarithms and exponents can be thought of as inverses of each other. Given two of the variables in \( b^n = x \), the third can be deduced using exponents to solve for \(x\), logarithms for \(n\), and the \(\nth\)-root for \(b\).
\begin{align*} 2^3 = x &\implies x = 2^3 = 2 \cdot 2 \cdot 2 = 8 \\ 2^n = 8 &\implies n = \log_2{8} = 3 \\ b^3 = 8 &\implies b = \sqrt[3]{8} = 2 \end{align*}
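The same three solutions can be computed with the standard library; a sketch (floating-point results, hence the approximate comparisons):

```python
import math

# Solve b**n = x for whichever variable is unknown.
x = 2 ** 3                 # exponentiation solves for x
n = math.log(8, 2)         # logarithm solves for n
b = 8 ** (1 / 3)           # cube root solves for b (up to floating point)

assert x == 8
assert math.isclose(n, 3.0)
assert math.isclose(b, 2.0)
```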
The logarithm is a powerful tool for simplifying functions. It is often used to transform an exponential function into a linear one, or a linear function into a concave one in which the marginal impact of a variable declines as its value rises.
Examples:
- When plotting a linear and exponential function on the same graph.
- The principle of diminishing returns is captured graphically by \(y=\ln(x)\).
Logs can be written in any base. When the base is omitted, context determines the implicit base. In computer science, \(\log \equiv \log_2\). Elsewhere, it generally denotes base 10 (\(\log \equiv \log_{10}\)), but sometimes it denotes log base \(e\). Regardless of context, \(\ln \equiv \log_e\).
Properties of logarithms
Logarithms are only defined for positive arguments, \(x > 0\). This follows from the identity
\[a^{\log_a x } = x\]
Let \(\log_a x = b\), so that \(a^b = x\). Assume \(a > 0\) and \(x \le 0\). Since \(a^b > 0\) for every real \(b\) when \(a > 0\), no \(b\) can satisfy \(a^b = x \le 0\); thus \(\log_a x\) is undefined.
Loosely, logarithms transform multiplication/division into addition/subtraction and vice versa.
The log of a product is equivalent to the sum of the logs:
\[ \ln(x_1 \cdot x_2) = \ln(x_1) + \ln(x_2), \cond{for}{x_1,x_2 > 0} \]
and the log of a quotient is equivalent to the difference of the logs:
\[ \ln\left(\frac{x_1}{x_2}\right) = \ln(x_1) - \ln(x_2), \cond{for}{x_1,x_2 > 0} \]
Note that the logarithm does not distribute over addition or subtraction: in general, \(\log(x_1 + x_2) \neq \log(x_1) + \log(x_2)\).
The log of an exponent is equal to the product of the exponent and log. That is,
\[ \log(x^b) = b \log x \]
Lastly, \(\log(1) = 0\) and, for the natural log,
\[ \ln(x+1) \approx x, \cond{as}{x \rightarrow 0} \]
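These properties can be spot-checked numerically; a Python sketch with arbitrary illustrative values:

```python
import math

x1, x2 = 3.0, 5.0

# Product rule: ln(x1 * x2) = ln(x1) + ln(x2)
assert math.isclose(math.log(x1 * x2), math.log(x1) + math.log(x2))
# Quotient rule: ln(x1 / x2) = ln(x1) - ln(x2)
assert math.isclose(math.log(x1 / x2), math.log(x1) - math.log(x2))
# Power rule: log(x**b) = b * log(x)
assert math.isclose(math.log(x1 ** 4), 4 * math.log(x1))
# Near zero, ln(1 + x) is approximately x
assert math.isclose(math.log(1 + 1e-6), 1e-6, rel_tol=1e-5)
```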
Trigonometric
Fundamental identities:
\begin{align*} \sin^2\theta + \cos^2\theta &= 1 \\ \sec^2\theta - \tan^2\theta &= 1 \\ \csc^2\theta - \cot^2\theta &= 1 \end{align*}
\begin{align*} \csc \theta &= \frac{1}{\sin\theta} \\ \sec \theta &= \frac{1}{\cos\theta} \\ \cot \theta &= \frac{1}{\tan\theta} \end{align*}
(Note that \(\sin^{-1}\) conventionally denotes the inverse function \(\arcsin\), not the reciprocal.)
\begin{align*} \sin(-\theta) &= -\sin\theta \\ \tan(-\theta) &= -\tan\theta \\ \cos(-\theta) &= \cos\theta \end{align*}
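Each identity can be verified numerically at an arbitrary angle; a Python sketch (\(\theta = 0.7\) rad is an illustrative choice):

```python
import math

theta = 0.7  # an arbitrary angle in radians
s, c, t = math.sin(theta), math.cos(theta), math.tan(theta)

# Pythagorean identities
assert math.isclose(s**2 + c**2, 1.0)
assert math.isclose((1 / c)**2 - t**2, 1.0)          # sec^2 - tan^2
assert math.isclose((1 / s)**2 - (1 / t)**2, 1.0)    # csc^2 - cot^2

# Odd/even symmetry
assert math.isclose(math.sin(-theta), -s)
assert math.isclose(math.tan(-theta), -t)
assert math.isclose(math.cos(-theta), c)
```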
Sequences and Series
A sequence is an ordered list of numbers; a series is the sum of a sequence’s terms.
The limit of a sequence \(\{x_i\}\) is a number \(L\) such that, for every \(\epsilon > 0\), there is some \(N\) for which \(\abs{x_i - L} < \epsilon\) whenever \(i > N\); we then write \(\lim_{i\rightarrow\infty} x_i = L\).
A sequence is said to converge if it has a finite limit, and diverge if it has no such limit or a limit of \(\pm\infty\).
Power Series
An infinite series of the form
\begin{align*} g(x) &= \sum_{n=0}^\infty a_n (x - c)^n \\ &= a_0 + a_1 (x - c) + a_2 (x-c)^2 + \ldots \end{align*}
Taylor Series
The Taylor series / expansion of a function is an infinite sum whose terms are computed from the function’s derivatives at a single point.
\begin{align*} g(x) &= \sum_{n=0}^\infty \frac{f^{(n)}(p)}{n!} (x - p)^n \\ &= f(p) + \frac{f'(p)}{1!} (x-p) + \frac{f''(p)}{2!} (x-p)^2 + \ldots \end{align*}
Maclaurin Series
A Maclaurin series is a Taylor series expansion of a function about 0,
\begin{align*} g(x) &= \sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!} x^n \\ &= f(0) + \frac{f'(0)}{1!} x + \frac{f''(0)}{2!} x^2 + \ldots \end{align*}
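For \(f(x) = e^x\), every derivative at 0 equals 1, so the Maclaurin series is \(\sum x^n / n!\). A sketch of the partial sum converging to `math.exp` (the function name `exp_maclaurin` and the term count are illustrative):

```python
import math

def exp_maclaurin(x, terms=20):
    # Partial sum of the Maclaurin series for e**x: sum of x**n / n!
    return sum(x**n / math.factorial(n) for n in range(terms))

# The partial sum matches the library exponential closely for modest x.
for x in (0.0, 1.0, -0.5, 2.0):
    assert math.isclose(exp_maclaurin(x), math.exp(x), rel_tol=1e-9)
```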
Limits and Continuity
Definition
Let \(f(x)\) be a function defined on an interval that contains \(x = a\), except possibly at \(x = a\).
Then call \(L\) the limit of \(f(x)\) as \(x\) approaches \(a\), that is,
\[ \lim_{x \rightarrow a} f(x) = L \]
if for every number \(\epsilon > 0\) there is some number \(\delta > 0\) such that
\[ \abs{f(x) - L} < \epsilon \stext{whenever} 0 < \abs{x-a} < \delta \]
That is, loosely, \(f(x)\) can be made arbitrarily close to \(L\) by taking \(x\) sufficiently close to (but not equal to) \(a\).
If the limits from above and below are equal, the function has a unique limit at \(c\). That is,
\[ \lim_{x \rightarrow c^-} f(x) = \lim_{x \rightarrow c^+ }f(x) = \lim_{x \rightarrow c} f(x) \]
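The classic example \(f(x) = \sin(x)/x\), which is undefined at \(x = 0\) but has limit 1 there, illustrates the two-sided limit numerically; a Python sketch:

```python
import math

def f(x):
    # f(x) = sin(x)/x: undefined at x = 0, but the limit there is 1.
    return math.sin(x) / x

# Approach 0 from below and above; both one-sided values converge on 1.
for h in (1e-2, 1e-4, 1e-6):
    assert abs(f(-h) - 1.0) < h   # from the left
    assert abs(f(h) - 1.0) < h    # from the right
```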
Properties
For any \(f(x)\) and \(g(x)\) that both have a well-defined limit at \(x = c\), we have
\begin{align*} \lim_{x \rightarrow c} \paren{f(x) + g(x)} &= \lim_{x \rightarrow c} f(x) + \lim_{x \rightarrow c} g(x) \\ \lim_{x \rightarrow c} \paren{f(x) - g(x)} &= \lim_{x \rightarrow c} f(x) - \lim_{x \rightarrow c} g(x) \\ \lim_{x \rightarrow c} \paren{f(x) \cdot g(x)} &= \lim_{x \rightarrow c} f(x) \cdot \lim_{x \rightarrow c} g(x) \\ \lim_{x \rightarrow c} \paren{\rfrac{f(x)}{g(x)}} &= \frac{\lim_{x \rightarrow c} f(x)}{\lim_{x \rightarrow c} g(x)} \end{align*}
The quotient rule holds provided \(\lim_{x \rightarrow c} g(x) \neq 0\).
Continuity
A function is said to be continuous at \(x = a\) if
\[ \lim_{x \rightarrow a} f(x) = f(a) \]
and continuous on \([a, b]\) if it is continuous at each point in the interval.