# Mathematical nostalgia

# Logic and Proofs

## First-Order Logic

### Implication

\[ \begin{aligned} &P \implies Q \\ & P \\ \therefore\ &Q \end{aligned} \]

### Negation

\[ \neg(P \implies Q) \equiv P \land \neg Q \] \[ P \implies Q \equiv \neg P \lor Q \]

### Converse

Reversing the sufficient and necessary conditions. As a syllogistic form (inferring \(P\) from \(Q\) together with \(P \implies Q\)), it is known as affirming the consequent, a fallacy.

\[ P \implies Q \] \[ Q \implies P \]

### Contrapositive

The negation of the necessary condition implies the negation of the sufficient condition.

\[ \neg Q \implies \neg P \]

As a syllogistic form, known as modus tollens.

\[ \begin{aligned} &P \implies Q \\ &\neg Q \\ \therefore\ &\neg P \end{aligned} \]

### De Morgan’s Laws

\[ \neg (A \land B) \iff \neg A \lor \neg B \] \[ \neg (A \lor B) \iff \neg A \land \neg B \]
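With only two Boolean variables, these laws can be checked exhaustively; a minimal Python sketch:

```python
from itertools import product

# Exhaustively check both De Morgan laws over every truth assignment.
assignments = list(product([False, True], repeat=2))
law1 = all((not (a and b)) == ((not a) or (not b)) for a, b in assignments)
law2 = all((not (a or b)) == ((not a) and (not b)) for a, b in assignments)
```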

### Quantifiers

Existential: \(P\) is true for at least one \(x\)

\[ \exists x Px \]

Universal: \(P\) is true for all \(x\)

\[ \forall x Px \]

Combinations:

For all \(x\), there exists at least one \(y\) for which \(P\) is true.

\[ \forall x \exists y Pxy \]

There exists an \(x\) for which \(P\) holds true for all \(y\).

\[ \exists x \forall y Pxy \]
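On a finite domain, \(\forall\) corresponds to `all` and \(\exists\) to `any`, which makes the difference in quantifier order concrete. A small Python sketch (the domain and predicate are arbitrary illustrative choices):

```python
# Quantifier order matters. Over the domain {1, ..., 5} with the
# illustrative predicate P(x, y): x != y:
xs = ys = range(1, 6)
P = lambda x, y: x != y

forall_exists = all(any(P(x, y) for y in ys) for x in xs)  # ∀x ∃y Pxy
exists_forall = any(all(P(x, y) for y in ys) for x in xs)  # ∃x ∀y Pxy
```

For this predicate the first statement holds (every \(x\) has some \(y \neq x\)) while the second fails (every \(x\) is equal to itself for \(y = x\)), so the two forms are not interchangeable.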

## Proof forms: direct

### Proof by exhaustion

Break up the problem into cases that exhaustively cover the possibilities, and prove each case. Common in game theory, and any problem where different regions of the parameter space behave differently and admit different solutions.

### Proof by construction

If you can construct an example of something, then it exists. Typical for existence proofs.

### Proof by induction

Given (or having proved) a claim for a base case, if the claim can be proved for a generic inductive step (i.e., that given its truth for iteration \(n\), it follows that it’s true for \(n+1\)), then it follows that the claim is true for the sequence as a whole.

Example: Sum of a sequence

Prove that \[ \sum_{i=1}^n i = \frac{n(n+1)}{2} \]

For the base case,

\[ \sum_{i=1}^1 i = \frac{1(1+1)}{2} = 1 \]

For the inductive step, assume the claim holds for \(k\),

\begin{equation} \sum_{i=1}^k i = \frac{k(k+1)}{2} \label{eq:seqsum} \end{equation}

and show it remains true for \(k + 1\):

\[ \sum_{i=1}^{k+1} i = \frac{(k+1)(k+2)}{2} \]

Splitting off the last term and applying \eqref{eq:seqsum}, we have

\begin{align*} \sum_{i=1}^k i + (k+1) &= \frac{k(k+1)}{2} + (k + 1)\\ &= \frac{k(k+1) + 2k + 2}{2} \\ &= \frac{k^2 + k + 2k + 2}{2} \\ &= \frac{(k+1)(k+2)}{2} \\ \end{align*}
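The closed form can also be spot-checked numerically (a check, not a proof):

```python
# Spot-check the closed form n(n + 1)/2 against the direct sum for many n.
def direct_sum(n):
    return sum(range(1, n + 1))

all_match = all(direct_sum(n) == n * (n + 1) // 2 for n in range(1, 101))
```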

## Proof forms: indirect

### Proof by counterexample

Straightforward. For the conditional claim

\[A \land B \implies D\]

a single counterexample, where

\[A \land B \land \neg D\]

suffices to disprove the proposition.

### Proof by contradiction

A special case of reductio ad absurdum premised on the negation of the proposition intended to be proved.

That is, if you want to prove \(P\), assume \(\neg P\) and deduce from that premise that \(Q \land \neg Q\).

# Transcendental Functions

## Logarithms, Exponents, Roots

Logarithms and exponents can be thought of as inverses of each other. Given two of the variables in \( b^n = x \), the third can be deduced: exponentiation solves for \(x\), the logarithm for \(n\), and the \(n\)th root for \(b\).

\begin{align*} 2^3 = x &\implies x = 2^3 = 2 \cdot 2 \cdot 2 = 8 \\ 2^n = 8 &\implies n = \log_2{8} = 3 \\ b^3 = 8 &\implies b = \sqrt[3]{8} = 2 \\ \end{align*}
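A quick numeric check of the three inverse operations:

```python
import math

# Given b**n = x, each variable can be recovered from the other two.
x = 2 ** 3             # exponentiation solves for x
n = math.log(8, 2)     # logarithm solves for n
b = 8 ** (1 / 3)       # cube root solves for b
```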

The logarithm is a powerful tool for simplifying functions. It is often used to transform an exponential relationship into a linear one, or a linear relationship into a concave one in which the marginal impact of a variable declines as its value rises.

Examples:

- When plotting a linear and exponential function on the same graph.
- The principle of diminishing returns is captured graphically by \(y=\ln(x)\).

Logs can be written in any base. When the base is omitted, context determines the implicit base. In computer science, \(\log \equiv \log_2\). Elsewhere, it generally denotes base 10 (\(\log \equiv \log_{10}\)), but sometimes it denotes log base \(e\). Regardless of context, \(\ln \equiv \log_e\).

## Properties of logarithms

Logarithms are only defined on the positive reals, \((0, \infty)\). This follows from the identity

\[a^{\log_a x } = x\]

Let \(\log_a x = b\) and assume \(a > 0\) and \(x \le 0\). Since \(a^b > 0\) for every real \(b\) when \(a > 0\), no \(b\) can satisfy \(a^b = x \le 0\). Thus \(b\) is undefined.

Loosely, logarithms transform multiplication/division into addition/subtraction and vice versa.

The log of a product is equivalent to the sum of the logs:

\[ \ln(x_1 \cdot x_2) = \ln(x_1) + \ln(x_2), \cond{for}{x_1,x_2 > 0} \]

and the log of a quotient is equivalent to the difference of the logs:

\[ \ln\left(\frac{x_1}{x_2}\right) = \ln(x_1) - \ln(x_2), \cond{for}{x_1,x_2 > 0} \]

Note that logs do not distribute over addition and subtraction: the log of a sum is not the sum of the logs.

The log of an exponent is equal to the product of the exponent and log. That is,

\[ \log(x^b) = b \log x \]

Lastly, \(\log(1) = 0\) and, for the natural log,

\[ \ln(x+1) \approx x, \cond{as}{x \rightarrow 0} \]
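These properties are easy to spot-check numerically (the test values are arbitrary):

```python
import math

x1, x2 = 3.7, 12.5  # arbitrary positive test values

product_rule  = math.isclose(math.log(x1 * x2), math.log(x1) + math.log(x2))
quotient_rule = math.isclose(math.log(x1 / x2), math.log(x1) - math.log(x2))
power_rule    = math.isclose(math.log(x1 ** 4), 4 * math.log(x1))
log_of_one    = math.log(1) == 0
small_arg     = math.isclose(math.log(1 + 1e-8), 1e-8, rel_tol=1e-6)
```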

## Trigonometric

Fundamental identities:

\begin{align*} \sin^2\theta + \cos^2\theta &= 1 \\ \sec^2\theta - \tan^2\theta &= 1 \\ \csc^2\theta - \cot^2\theta &= 1 \end{align*}

\begin{align*} \csc \theta &= \frac{1}{\sin\theta} \\ \sec \theta &= \frac{1}{\cos\theta} \\ \cot \theta &= \frac{1}{\tan\theta} \end{align*}

\begin{align*} \sin(-\theta) &= -\sin\theta \\ \tan(-\theta) &= -\tan\theta \\ \cos(-\theta) &= \cos\theta \end{align*}
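A numeric spot-check of a few of these identities (the angle is arbitrary):

```python
import math

theta = 0.73  # arbitrary angle in radians

pythagorean = math.isclose(math.sin(theta) ** 2 + math.cos(theta) ** 2, 1)
secant_form = math.isclose(1 / math.cos(theta) ** 2 - math.tan(theta) ** 2, 1)
odd_sine    = math.isclose(math.sin(-theta), -math.sin(theta))
even_cosine = math.isclose(math.cos(-theta), math.cos(theta))
```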

# Sequences and Series

A sequence is an ordered list of numbers; a series is the sum of the terms of a sequence.

The **limit** of a sequence \({x_i}\) is the value \(L\), if any, that its terms
approach arbitrarily closely as \(i\) grows, written \(\lim_{i\rightarrow\infty} x_i=L\).

A sequence is said to **converge** if it has a finite limit, and to **diverge** if it has
no such limit or a limit of \(\pm\infty\).

## Power Series

An infinite series of the form

\begin{align*} g(x) &= \sum_{n=0}^\infty a_n (x - c)^n \\ &= a_0 + a_1 (x - c) + a_2 (x-c)^2 + \ldots \end{align*}

## Taylor Series

The Taylor series / expansion of a function is an infinite sum of terms expressed in terms of the function’s derivatives at a single point.

\begin{align*} g(x) &= \sum_{n=0}^\infty \frac{f^{(n)}(p)}{n!} (x - p)^n \\ &= f(p) + \frac{f'(p)}{1!} (x-p) + \frac{f''(p)}{2!} (x-p)^2 + \ldots \end{align*}

## Maclaurin Series

A Maclaurin series is a Taylor series expansion of a function about 0,

\begin{align*} g(x) &= \sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!} x^n \\ &= f(0) + \frac{f'(0)}{1!} x + \frac{f''(0)}{2!} x^2 + \ldots \end{align*}
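A partial-sum sketch for \(f(x) = e^x\), whose Maclaurin series is \(\sum_{n=0}^\infty x^n/n!\) since every derivative at 0 equals 1:

```python
import math

def maclaurin_exp(x, terms=20):
    # Every derivative of e^x at 0 is 1, so the series is the sum of x^n / n!.
    return sum(x ** n / math.factorial(n) for n in range(terms))

approx = maclaurin_exp(1.5)
exact = math.exp(1.5)
```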

# Limits and Continuity

## Definition

Let \(f(x)\) be a function defined on an interval that contains \(x = a\), except possibly at \(x = a\).

Then call \(L\) the limit of \(f(x)\) as \(x\) approaches \(a\), that is,

\[ \lim_{x \rightarrow a} f(x) = L \]

if for every number \(\epsilon > 0\) there is some number \(\delta > 0\) such that

\[ \abs{f(x) - L} < \epsilon \stext{whenever} 0 < \abs{x-a} < \delta \]

That is, loosely, the output can be made arbitrarily close to \(L\) by taking the input sufficiently close to \(a\).

If the limits from above and below are equal, the function has a unique limit at \(c\). That is,

\[ \lim_{x \rightarrow c^-} f(x) = \lim_{x \rightarrow c^+ }f(x) = \lim_{x \rightarrow c} f(x) \]

## Properties

For any \(f(x)\) and \(g(x)\) that both have a well-defined limit at \(x = c\) (and, for the quotient, \(\lim_{x \rightarrow c} g(x) \neq 0\)), we have

\begin{align*} \lim_{x \rightarrow c} \paren{f(x) + g(x)} &= \lim_{x \rightarrow c} f(x) + \lim_{x \rightarrow c} g(x) \\ \lim_{x \rightarrow c} \paren{f(x) - g(x)} &= \lim_{x \rightarrow c} f(x) - \lim_{x \rightarrow c} g(x) \\ \lim_{x \rightarrow c} \paren{f(x) \cdot g(x)} &= \lim_{x \rightarrow c} f(x) \cdot \lim_{x \rightarrow c} g(x) \\ \lim_{x \rightarrow c} \paren{\rfrac{f(x)}{g(x)}} &= \frac{\lim_{x \rightarrow c} f(x)}{\lim_{x \rightarrow c} g(x)} \end{align*}

## Continuity

A function is said to be continuous at \(x = a\) if

\[ \lim_{x \rightarrow a} f(x) = f(a) \]

and continuous on \([a, b]\) if it is continuous at each point in the interval.

# Calculus

## Differentiation

### Definition

\[ f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \]
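The definition suggests a numeric approximation: hold \(h\) small but finite. A minimal sketch:

```python
def difference_quotient(f, x, h=1e-6):
    # Finite-h version of the limit definition of the derivative.
    return (f(x + h) - f(x)) / h

approx = difference_quotient(lambda x: x ** 3, 2.0)  # exact value: 3 * 2**2 = 12
```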

### Power Rule

For a function of the form \( f(x) = x^n \), the derivative is given by \[ f'(x) = n \cdot x^{n-1} \]

### Product Rule

For a product of functions \( f(x) \cdot g(x) \), the derivative is given by \[ (f \cdot g)' = f' \cdot g + f \cdot g' \]

### Quotient Rule

Given two functions \(f(x)\) and \(g(x)\), where \(g(x) \neq 0\), the derivative of their quotient with respect to \(x\) is given by \[ \left( \frac{f}{g} \right)' = \frac{f' \cdot g - f \cdot g'}{g^2} \]

### Chain Rule

To find the derivative of composed functions, assuming each function is differentiable at its respective input, multiply: (1) the derivative of the outer function evaluated at the inner function, and (2) the derivative of the inner function: \[ (f \circ g)'(x) = f'(g(x)) \cdot g'(x) \] Apply repeatedly for deeply composed functions, for example \(h \circ g \circ u = h(g(u(x)))\): \[ (h \circ g \circ u)'(x) = h'(g(u(x))) \cdot g'(u(x)) \cdot u'(x) \]

### Implicit differentiation

Given an implicit equation relating, say, \(x\) and \(y\), differentiate both sides with respect to \(x\), applying the chain rule as needed, then isolate \(y’\) and simplify. Example:

\begin{align*} x^2 + y^2 &= 1 \\ \frac{\mathrm{d}}{\mathrm{d}x}\left( x^2 + y^2 \right) &= \frac{\mathrm{d}}{\mathrm{d}x}(1) \\ \frac{\mathrm{d}}{\mathrm{d}x}\left( x^2 \right) + \frac{\mathrm{d}}{\mathrm{d}x}\left( y^2 \right) &= 0 \\ 2x + 2y \cdot \frac{\mathrm{d} y}{\mathrm{d} x} &= 0 \\ \implies \frac{\mathrm{d} y}{\mathrm{d} x} &= - \frac{x}{y} \end{align*}

### L’Hôpital’s Rule

Used to evaluate indeterminate forms that arise when taking the limit of a quotient of two functions, where both numerator and denominator approach zero or infinity.

Loosely: if you’re evaluating the limit of a quotient in which numerator and denominator both approach \(0\) or \(\infty\), and where direct substitution yields an indeterminate form like \(\frac{0}{0}\) or \(\frac{\infty}{\infty}\), then you can equivalently evaluate the limit of the quotient of the derivatives.

More precisely: Suppose \(f\) and \(g\) are differentiable on an open interval containing \(a\), except possibly at \(a\). If:

\[ \lim_{x \to a} f(x) = 0 \textrm{ and } \lim_{x \to a}g(x) = 0, \textrm{ or } \] \[ \lim_{x \to a} f(x) = \pm\infty \textrm{ and } \lim_{x \to a}g(x) = \pm\infty, \]

and \(g'(x)\) is not zero in a punctured neighborhood of \(a\), then:

\[ \lim_{x \to a} \frac{f(x)}{g(x)} = \lim_{x \to a} \frac{f'(x)}{g'(x)} \]

Example:

In the following, as \(x \to 0\), both \(\sin(x)\) and \(x\) approach \(0\). Direct substitution yields the indeterminate form \(\frac{0}{0}\). Applying L’Hôpital’s Rule,

\begin{align*} \lim_{x \to 0} \frac{\sin(x)}{x} &= \lim_{x \to 0} \frac{\cos(x)}{1} \\ &= \lim_{x \to 0} \cos(x) \\ &= \cos(0) \\ &= 1 \end{align*}
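Numerically, the ratio indeed approaches 1 from either side; a quick check:

```python
import math

# sin(x)/x tends to 1 as x approaches 0 from either side.
ratios = [math.sin(x) / x for x in (0.1, 0.01, 0.001, -0.001)]
```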

### Related rates

Technique used to find the rate of change of a quantity that depends on other variable quantities at a specific instant in time.

Example: Consider a right circular cone with height \(h\) and radius \(a\). If the radius is increasing at a rate of 2 cm/s and the height is increasing at a rate of 3 cm/s, find the rate at which the volume of the cone is changing when the radius is 4 cm and the height is 6 cm.

\begin{align*} V &= \frac{1}{3} \pi r^2 h \\ \frac{dV}{dt} &= \frac{d}{dt}\left[\frac{1}{3} \pi r^2\right] h + \frac{1}{3} \pi r^2 \frac{dh}{dt}\\ &= \frac{1}{3} \pi 2rh \frac{dr}{dt} + \frac{1}{3} \pi r^2 \frac{dh}{dt}\\ &= \frac{1}{3} \pi r \left( 2h \dot{r} + r \dot{h} \right)\\ &= \frac{1}{3} \pi (4) (2 \cdot 6 \cdot 2 + 4 \cdot 3)\\ &= 48 \pi \sfrac{\mathrm{cm}^3}{\mathrm{s}} \end{align*}
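The result can be sanity-checked by differencing the volume along the stated growth paths (a numeric sketch; the linear growth functions are constructed from the given rates):

```python
import math

def volume(t):
    # r grows at 2 cm/s and h at 3 cm/s; at t = 0, r = 4 cm and h = 6 cm.
    r, h = 4 + 2 * t, 6 + 3 * t
    return (1 / 3) * math.pi * r ** 2 * h

dt = 1e-6
dV_dt = (volume(dt) - volume(-dt)) / (2 * dt)  # central difference at t = 0
```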

### Linear approximations

Approximate the value of a function \(f\) near a specific point \(a\) by evaluating the tangent line to \(f\) at \(a\), whose slope is given by the derivative.

From the point-slope form of the equation for the line tangent to \(f\) at \(a\), we can derive the formula for linear approximation as follows:

\begin{align*} y - y_1 &= m (x - x_1) \\ y - f(a) &= f'(a) (x - a) \\ y &= f(a) + f'(a) (x-a) \end{align*}

That is, the function \(L(x)\) approximates \(f(x)\) near \(x = a\):

\[ L(x) = f(a) + f'(a)(x - a) \]

Example:

To approximate \(\sqrt{9.1}\), linearize \(\sqrt{x}\) at \(a = 9\):

\begin{align*} L(x) &= \sqrt{a} + \frac{1}{2} (a)^{-\frac{1}{2}} (x - a) \\ L(9.1) &= \sqrt{9} + \frac{1}{2} \frac{1}{\sqrt{9}} (9.1 - 9) \\ &= 3 + \frac{1}{2 \cdot 3 \cdot 10} \\ &= 3 + \frac{1}{60} \\ &\approx 3.0167 \end{align*}
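A quick check of the approximation against the true value:

```python
import math

def linear_approx(f, fprime, a, x):
    # L(x) = f(a) + f'(a)(x - a): the tangent line at a.
    return f(a) + fprime(a) * (x - a)

approx = linear_approx(math.sqrt, lambda a: 0.5 / math.sqrt(a), 9, 9.1)
exact = math.sqrt(9.1)
```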

## Extrema and optimization

### Extreme Value Theorem

A function that’s continuous on a closed interval is guaranteed to attain an absolute maximum and minimum on that interval (possibly at its endpoints). Leveraged in optimization problems.

**Formally**:

If \(f\) is continuous on the closed interval \([a, b]\), then there exist points \(c\) and \(d\) in \([a, b]\) such that \(f(c) \leq f(x) \leq f(d)\) for all \(x \in [a, b]\).

### Mean Value Theorem

If a function is continuous and differentiable on an interval, there is at least one point on that interval where the instantaneous rate of change is equal to the average rate of change over the interval.

**Formally**:

If \(f\) is continuous on \([a, b]\) and differentiable on \((a, b)\), then there exists at least one point \(c \in (a, b)\) such that \[ f'(c) = \frac{f(b) - f(a)}{b-a} \]

- Applications: Fundamental theorem of calculus, proving inequalities & monotonicity
- Special case: Rolle’s Theorem: if additionally \(f(a) = f(b)\), then \(f'(c) = 0\) for some \(c \in (a, b)\) (existence of a critical point)
- Generalization: Cauchy’s Mean Value Theorem, used in the proof of L’Hôpital’s Rule. If \(f\) and \(g\) are both continuous on \([a, b]\) and differentiable on \((a, b)\), with \(g'\) nonzero on \((a, b)\), then for some \(c \in (a, b)\) \[ \frac{f'(c)}{g'(c)} = \frac{f(b) - f(a)}{g(b) - g(a)} \]

### Concavity and convexity

**Concavity**: (Concave Down) The graph of a function lies below its tangent lines
at all points in the interval of interest.

**Convexity**: (Concave Up) The graph of a function lies above its tangent lines
at all points in the interval of interest.

A **convex function** \(f(x)\) is so called when for any two points \(a\) and \(b\) in its
domain and any \(\lambda \in [0, 1]\),
\[ f(\lambda a + (1-\lambda)b) \leq \lambda f(a) + (1-\lambda)f(b) \]

A **concave function** is defined similarly, but with a flipped inequality:
\[ f(\lambda a + (1-\lambda)b) \geq \lambda f(a) + (1-\lambda)f(b) \]

### Optimization

1. **Critical points**. A point \(x = c\) at which \(f'(c)\) is zero or undefined.
2. **First derivative test for local extrema**. Local minimum at \(c\) if the sign of the derivative goes from negative to positive around \(c\); local maximum if it goes from positive to negative. Not a local extremum if the sign does not change.
3. **Second derivative test for concavity**. Confirm (2) by evaluating \(f''(c)\). Concave up corresponds to a local minimum: \(f''(c) > 0\). Concave down corresponds to a local maximum: \(f''(c) < 0\). Inconclusive if \(f''(c) = 0\). Only applies to twice-differentiable functions.
4. **Check endpoints**. If applicable (when the function is defined on a closed interval), evaluate \(f\) at the endpoints to determine whether local extrema are global.
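The procedure applied to a toy example, \(f(x) = x^3 - 3x\) on \([-2, 2]\) (a sketch; the critical points are solved by hand here rather than numerically):

```python
# Walk the checklist for f(x) = x**3 - 3x on the closed interval [-2, 2].
f = lambda x: x ** 3 - 3 * x
fpp = lambda x: 6 * x               # f''(x), for the second derivative test

critical = [-1.0, 1.0]              # roots of f'(x) = 3x^2 - 3
local_maxima = [c for c in critical if fpp(c) < 0]  # concave down
local_minima = [c for c in critical if fpp(c) > 0]  # concave up

# Compare critical points against the endpoints for global extrema.
candidates = critical + [-2.0, 2.0]
global_max = max(f(c) for c in candidates)
global_min = min(f(c) for c in candidates)
```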

## Fundamental Theorem of Calculus (FTC)

Expresses the relationship between differentiation and integration and provides the logic for evaluating definite integrals.

**Part 1**

If \(f\) is continuous on \([a,b]\) and \(F\) is an antiderivative of \(f\) (i.e., \(F' = f\)), then

\[ \Int{x}{f(x)}{a}{b} = F(b) - F(a). \]

**Interpretation**

Integration can be used to find the net area under the curve of \(f(x)\) on \([a,b]\). The difference in the antiderivative evaluated at the interval bounds can be expressed with the more compact notation \(F(x)\big\vert_a^b\).

**Part 2**

Given a function \( \displaystyle G(x) = \Int{t}{f(t)}{a}{x} \), where \(x \in [a, b]\), if \(f\) is continuous on \([a, b]\), then \(G\) is differentiable on \((a, b)\) and

\[G'(x) = f(x)\]

**Interpretation**

The derivative of the integral of \(f\) with respect to its upper limit is \(f(x)\) itself. The upper limit is the variable of interest here because the aim is to demonstrate the relationship between differentiation and integration for arbitrary bounds of integration.

## Integration

### Substitution

Basic procedure: Substitute to simplify, solve, then undo the substitution. Examples:

**Basic**

Solve \(\Int{x}{(2x+1)^3}{}{}\). Let \(u = 2x+1\). Then \(du = 2 \,dx \implies dx = \frac{1}{2} \,du\), and

\begin{align*} \Int{x}{(2x+1)^3}{}{} &= \Int{u}{u^3 \frac{1}{2}}{}{} \\ &= \frac{1}{2} \Int{u}{u^3}{}{} \\ &= \frac{1}{2} \cdot \frac{1}{4} u^4 + C \\ &= \frac{u^4}{8} + C \\ &= \frac{(2x+1)^4}{8} + C \\ \end{align*}
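The result can be verified numerically by comparing the antiderivative against a midpoint Riemann sum:

```python
# Compare the antiderivative F(x) = (2x + 1)**4 / 8 with a midpoint
# Riemann sum of f(x) = (2x + 1)**3 over [0, 1].
f = lambda x: (2 * x + 1) ** 3
F = lambda x: (2 * x + 1) ** 4 / 8

n = 10_000
width = 1 / n
riemann = sum(f((i + 0.5) * width) * width for i in range(n))
exact = F(1) - F(0)  # (81 - 1) / 8 = 10
```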

**Trigonometric Function**

Solve \(\Int{x}{\sin(x)\cos(x)}{}{}\). Let \(u = \sin(x)\). Then \(du = \cos(x) \,dx \implies dx = \frac{1}{\cos(x)} \,du\), and

\begin{align*} \Int{x}{\sin(x)\cos(x)}{}{} &= \Int{u}{u \cos(x) \cdot \frac{1}{\cos(x)}}{}{} \\ &= \Int{u}{u}{}{} \\ &= \frac{1}{2} u^2 + C \\ &= \frac{1}{2} \sin^2(x) + C \end{align*}

**Exponential Function**

Solve \(\Int{x}{e^{3x}}{}{}\). Let \(u = 3x\). Then \(du = 3 \,dx \implies dx = \frac{1}{3} \,du\), and

\begin{align*} \Int{x}{e^{3x}}{}{} &= \frac{1}{3} \Int{u}{e^{u}}{}{} \\ &= \frac{1}{3} e^u + C \\ &= \frac{1}{3} e^{3x} + C \end{align*}

**Rational Function**

Solve \(\displaystyle \Int{x}{\frac{2x}{x^2+1}}{}{}\).

Let \(u = x^2 + 1\). Then \(du = 2x \,dx \implies dx = \frac{1}{2x} \,du\), and

\begin{align*} \Int{x}{\frac{2x}{x^2+1}}{}{} &= \Int{u}{\frac{2x}{u} \cdot \frac{1}{2x}}{}{} \\ &= \Int{u}{\frac{1}{u}}{}{} \\ &= \ln \vert{u}\vert + C \\ &= \ln \vert{x^2+1}\vert + C \end{align*}

### Complex trigonometric substitution

Trig substitution can be used to simplify integrals involving square roots of quadratic expressions. The main forms where this technique is useful are \(\sqrt{a^2 - x^2}\), \(\sqrt{x^2 - a^2}\), and \(\sqrt{a^2 + x^2}\).

Illustrating with \(\sqrt{a^2 - x^2}\), substitute \(x=a\sin(\theta)\),
where \(-\frac{\pi}{2}\le\theta\le\frac{\pi}{2}\).

Then \(\d x=a\cos(\theta)\,\d\theta\).

Relevant trig identities:
\(\sin^2\theta + \cos^2\theta = 1\), and \(\cos2\theta = 2\cos^2\theta - 1\)

Example: \(\Int{x}{\sqrt{4-x^2}}{}{}\). Let \(x=2\sin\theta\).

\begin{align*} \Int{x}{\sqrt{4-x^2}}{}{} &= \Int{\theta}{\sqrt{2^2-(2\sin\theta)^2} \cdot 2\cos\theta}{}{} \\ &= \Int{\theta}{2\sqrt{1-\sin^2\theta} \cdot 2\cos\theta}{}{} \\ &= \Int{\theta}{2\sqrt{\cos^2\theta} \cdot 2\cos\theta}{}{} \\ &= \Int{\theta}{4\cos^2\theta}{}{} \\ &= \Int{\theta}{4 \left( \frac{1 + \cos 2\theta}{2} \right)}{}{} \\ &= 2 \Int{\theta}{1}{}{} + 2 \Int{\theta}{\cos 2\theta}{}{} \\ &= 2 \theta{} + 2 \frac{1}{2} \sin 2\theta + C\\ &= 2 \theta{} + \sin 2\theta + C \\ \end{align*}

Back-substituting \(x=2\sin\theta \implies \theta = \sin^{-1}\left(\frac{x}{2}\right)\):

\begin{align*} \Int{x}{\sqrt{4-x^2}}{}{} &= 2 \theta{} + \sin 2\theta + C \\ &= 2 \sin^{-1}\left(\frac{x}{2}\right) + 2\sin\theta\cos\theta + C \\ &= 2 \sin^{-1}\left(\frac{x}{2}\right) + x \cos\theta + C \\ &= 2 \sin^{-1}\left(\frac{x}{2}\right) + x \sqrt{1-\sin^2\theta} + C \\ &= 2 \sin^{-1}\left(\frac{x}{2}\right) + x \sqrt{1 - {\left( \frac{x}{2} \right)}^2 } + C \\ &= 2 \sin^{-1}\left(\frac{x}{2}\right) + x \sqrt{1 - \left( \frac{x^2}{4} \right) } + C \\ &= \boxed{ 2 \sin^{-1}\left(\frac{x}{2}\right) + \frac{x}{2} \sqrt{4-x^2} + C } \end{align*}

### Integration by parts

Integration by parts is a technique used to evaluate the integral of a product of two functions. The formula is derived from the product rule for differentiation.

For two functions \(u\) and \(v\) that are differentiable on an interval,

\[ \int u \,\d v = u v - \int v \,\d u \]

**Choosing \(u\) and \(\d v\)**

Choose \(u\) so that its derivative \(\d u\) is simpler than \(u\) itself, and choose \(\d v\) so that its integral \(v\) is easily found.

A common heuristic is the **LIATE** rule, which ranks functions in order of
preference for choosing \(u\):

- Logarithmic functions (\(\ln x\))
- Inverse trigonometric functions (\(\arcsin x\), \(\arccos x\), \(\arctan x\))
- Algebraic functions (e.g., \(x^n\))
- Trigonometric functions (e.g., \(\sin x\), \(\cos x\))
- Exponential functions (\(e^x\))
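As an example of the heuristic: in \(\int x e^x \,\d x\), LIATE selects \(u = x\) (algebraic) over \(\d v = e^x \,\d x\) (exponential), giving the antiderivative \((x - 1)e^x\). A numeric check against a midpoint Riemann sum:

```python
import math

# Integration by parts with u = x, dv = e^x dx gives
# ∫ x e^x dx = x e^x - ∫ e^x dx = (x - 1) e^x.
f = lambda x: x * math.exp(x)
F = lambda x: (x - 1) * math.exp(x)

a, b, n = 0.0, 2.0, 10_000
width = (b - a) / n
riemann = sum(f(a + (i + 0.5) * width) * width for i in range(n))
exact = F(b) - F(a)  # e^2 + 1
```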