Math 519 – Problems


The following problems accompany the textbook (V.I. Arnold, Ordinary Differential Equations, 3rd edition, Springer-Verlag, 1992).

Contents

First Order Scalar Differential Equations

Direction fields, integral curves, and differential equations

  1. What is a direction field? What does $(1:2)$ stand for? What is the difference between a direction field and a vector field? Are the following statements true or false:
    1. $(1:2) = (-1:-2)$ ?
    2. $(1:2) = (2:4)$ ?
    3. $(1,2) = (2,4)$ ?
  2. On the left, below, is a direction field which is defined on the unit circle. Can it be extended to a continuous direction field on the whole region inside the unit circle?
  3. Above, on the right, is a vector field, which is defined on the unit circle. Can the vector field be extended to a continuous vector field defined on the region inside the circle?
  4. Consider the direction field on the unit circle, which, at the point $P=(\cos\theta, \sin\theta)$ is given by the direction $\ell_P = (\cos\frac\theta2 : \sin\frac\theta2)$.
    1. for any given point $P$, the angle $\theta$ is only defined up to a multiple of $2\pi$, so the formula $\ell_P = (\cos\frac\theta2 : \sin\frac\theta2)$ for the direction at the point $P$ could give different results depending on which particular $\theta$ is chosen. Show that this is actually not the case.
    2. Draw the direction field $\ell_P$ for all $P$ in the unit circle.
    3. Can one define a continuous vector field $\vv(x,y)$ on the unit circle such that $\ell_P$ is the direction of the vector $\vv(P)$?

Growth equations

Exponential growth equation, explosion equation, logistic curve, harvest quotas.
  1. Find the complete solution of the logistic equation $\dot x = x(1-x)$ (i.e. do problems 1 and 2 on page 25 of Arnold).
  2. Show that the equation $\dot x =\frac 1n x(1-x^n)$, where $n\ne0$ is a constant, can be reduced to the case $n=1$ by considering $y=x^n$. Find the solution, using the known solution to the logistic equation. Which differential equation do you get in the limit $n\searrow 0$?
  3. Consider the differential equation $\dot x = x(1-x^2)$.
    1. Draw the vector field and direction field (as in Figures 6 on page 19, and 12 on page 24 of the book).
    2. Find the solution of the differential equation.
  4. Consider the differential equation $\dot x = \sin\pi x$.
    1. Draw the vector field and direction field.
    2. Find the solution of the differential equation.
  5. Let $x(t), t\ge0$ be a (the?) solution to $\dot x = x e^{-x}$ with $x(0)=1$. Does $\lim_{t\to \infty} x(t)$ exist?
  6. Let $x(t)$ and $y(t)$ be solutions of $\dot x = x^2(1-x^2)$ and $\dot y = \sqrt{|y|}(1-y^2)$ respectively, and assume they are defined for $0\le t \le T$.
    1. True or false: if $-1\lt x(0)\lt 0$ then $-1\lt x(t) \lt 0$ for all $t\in[0,T]$?
    2. True or false: if $-1\lt y(0)\lt 0$ then $-1\lt y(t) \lt 0$ for all $t\in[0,T]$?
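For problem 1, the standard closed form of the logistic solution with $0\lt x_0\lt 1$ is $x(t)=\frac{x_0e^t}{1+x_0(e^t-1)}$. This can be sanity-checked numerically; the sketch below uses forward Euler (the initial value, time horizon, and step count are arbitrary choices):

```python
import math

def logistic_exact(t, x0):
    # Closed-form solution of x' = x(1 - x), x(0) = x0 (used here for 0 < x0 < 1).
    return x0 * math.exp(t) / (1.0 + x0 * (math.exp(t) - 1.0))

def euler(f, x0, t_end, n):
    # Plain forward Euler with n steps on [0, t_end].
    h = t_end / n
    x = x0
    for _ in range(n):
        x += h * f(x)
    return x

x_num = euler(lambda x: x * (1.0 - x), 0.1, 5.0, 100000)
x_ex = logistic_exact(5.0, 0.1)   # both should be close to 1, the stable equilibrium
```

The two values agree to several decimals, and both approach $1$ as $t$ grows, consistent with the phase-line picture of the logistic equation.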

The existence and uniqueness theorems

Statement and proof of the main theorems. This is in the book (towards the end), but in lecture we will follow a simplified presentation for both existence and uniqueness.
  1. Let $x_0\in\R$ be any number. Find the solution of \[ \dot x = x(1-x), \quad x(0)=x_0. \]
    1. How many solutions are there when $x_0=0$ or $x_0=1$?
    2. What is the maximal interval on which the solution is defined? Pay careful attention to the cases $x_0\lt0$, $0\lt x_0\lt 1$, $x_0\gt 1$.
  2. Lipschitz constants.
    1. Find a Lipschitz constant for the function $f(t, x) = x^2$ in the region $0\le t\le T$, $|x|\le M$.
    2. Find a Lipschitz constant for the function $f(t, x) = x(\sin(t)-x^2)$ in the region $0\le t\le 2\pi$, $|x|\le M$.
  3. Consider the function $f(x) = \sqrt{|x|}$.
    1. Is $f$ Lipschitz continuous in the region $|x|\le 1$?
    2. Let $\delta\in(0, 1)$ be some constant. Find a Lipschitz constant for the function $f$ in the region $\delta\le x\le 1$.
  4. Is the function $f(x) = x(1-\sqrt{|x|})$ Lipschitz continuous in the region $|x|\le 1$?
  5. Let $\delta\in(0, 1)$ be some constant. Find a Lipschitz constant for the function $f(x) = \{\delta +x^2\}^{1/4}$ in the region $|x|\le 1$.
  6. Does \[ f(t, x) = \begin{cases} x^2-x & 0\le t \le 1 \\ 2x & 1\le t\le 2 \end{cases} \] satisfy the Lipschitz condition in the existence and uniqueness theorem in the region $0\le t\le 2$, $|x|\le 1$?
  7. Let $x(t)$ and $y(t)$ be two solutions of \[ \dot x= f(t, x) \] on the interval $0\le t \le T$. Assume that $f$ satisfies the Lipschitz condition with Lipschitz constant $L$. Show that \[ |x(t) - y(t)| \ge e^{-Lt} |x(0) - y(0)| \] for $0\le t\le T$. Suggestion: follow the proof of the uniqueness theorem and consider the quantity $e^{+2Lt} \bigl(x(t)-y(t)\bigr)^2$—show that it is nondecreasing.
  8. (Continuation of the previous problem.) Suppose you are using a computer to approximate solutions of a differential equation of the form $\dot x = f(t,x)$, where the function $f$ is Lipschitz with Lipschitz constant $L=15$. Your computer cannot distinguish between numbers that differ by less than $10^{-14}$.
    1.  Suppose you have two different solutions $x(t)$ and $y(t)$ of your differential equation, which satisfy $y(0)-x(0) = 1$. The uniqueness theorem implies that those solutions satisfy $x(t)\neq y(t)$ for all $t\ge 0$. What is the smallest value of $t\gt0$ for which your computer may not be able to distinguish between $x(t)$ and $y(t)$? (I.e. the computer will say that all the decimals it knows of $x(t)$ and $y(t)$ are the same; assume this happens when $y(t)-x(t)\le 10^{-14}$.)
    2.  Suppose you have two solutions for which $y(0) = x(0) + 10^{-14}$. What is the longest time interval $0\lt t\lt T$ on which you can guarantee that the difference between the solutions $x(t)$ and $y(t)$ is no more than 0.1? Interpretation: you are trying to compute the solution of a differential equation, but you only know the initial value in 14 decimals. The solution you compute therefore will have some error, caused by the uncertainty in the initial value. How long does it take (at least) for this error in the solution to grow to an appreciable number, e.g. $0.1$?
  9. For each of the following initial value problems decide whether the solution exists for all $t\ge 0$, or only on a finite time interval $0\le t\lt T$.
    1.  $\dot x = x(1-x)$, $x(0) = \frac12$
    2.  $\dot x = -x(1-x)$, $x(0) = \frac12$
    3.  $\dot x = -x(1-x)$, $x(0) = 2$
    4.  $\dot x = x^2(1-x^5)$, $x(0) = \frac12$
    5.  $\dot x = x(2+\sin(t)-x)$, $x(0) = 1$
    Note that for some of these problems $x_*(t)=0$ and/or $x_\dagger(t)=1$ are special solutions. You can use this fact together with the uniqueness theorem to conclude that the solution you are looking at has to stay in some region of the $(t,x)$ plane.
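The simplified existence proof mentioned above rests on Picard iteration. For the model problem $\dot x = x$, $x(0)=1$, the iterates $x_k(t)$ are exactly the Taylor partial sums $\sum_{j=0}^{k} t^j/j!$ of $e^t$. The following sketch carries out the iteration numerically (trapezoid-rule integrals; the grid size is an arbitrary choice):

```python
import math

def picard(t, k, n=1000):
    # Picard iteration for x' = x, x(0) = 1:
    #   x_j(t) = 1 + integral_0^t x_{j-1}(s) ds,   with x_0(s) = 1.
    # Each integral is approximated by the trapezoid rule on n subintervals.
    xs = [1.0] * (n + 1)       # values of x_0 on a uniform grid over [0, t]
    h = t / n
    for _ in range(k):
        new = [1.0]
        acc = 0.0
        for i in range(n):
            acc += 0.5 * h * (xs[i] + xs[i + 1])
            new.append(1.0 + acc)
        xs = new
    return xs[-1]              # approximate value of x_k(t)
```

After a dozen iterations `picard(1.0, 12)` is already very close to $e$, illustrating the convergence of the iterates to the true solution.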


The Variational Equation

See the summary of the theorem on parameter dependence.
  1. Find and solve the variational equation in the following cases.
    1. $\dot x = x(1-x)$, $x(0)=\alpha$, at the given solution $\bar x(t) = 1$.
    2. $\dot x = x(1-x^3)$, $x(0)=\alpha$, at the given solution $\bar x(t) = 1$.
    3. $\dot x = x(1-x)^3$, $x(0)=\alpha$, at the given solution $\bar x(t) = 1$.
    4. $\dot x = \alpha x$, $x(0)=1$, at the given solution $\bar x(t)=e^{t}$, $\alpha=1$.
    5. $\dot x = -\alpha x^2$, $x(0)=1$, at the given solution $\bar x(t)=1/(1+t)$, $\alpha=1$.
  2. The logistic equation with harvesting. A fish population in a large lake, left to itself, would grow according to $\dot x = x(1-x)$. In addition to the natural birth and death in the population, fish are also being caught at a rate $h(t)$ (the “harvesting rate”). Thus the differential equation for the fish population which takes harvesting into account is \[ \dot x = x(1-x) - h(t). \] In the following questions you investigate the effect of a small amount of harvesting on a steady fish population.
    1. “Steady harvesting.” $\dot x = x(1-x) - \alpha$, $x(0) = 1$, at the given solution $\bar x(t) = 1$, and at $\alpha=0$.
    2. “Periodic harvesting.” $\dot x = x(1-x) - \alpha\sin \bigl(\frac{2\pi}{T} t\bigr)$, $x(0) = 1$, at the given solution $\bar x(t) = 1$, and at $\alpha=0$. Here $T\gt0$ is a positive constant (it may be simpler to abbreviate $k=2\pi/T$.)
  3. Read the example where a variational equation is derived and solved. The result of that computation is that for small $\alpha$ the solution $x(t, \alpha)$ of \[ \dot x = x(1-x) +\alpha\sin kt, \qquad x(0)=0 \] is approximately given by \[ x(t, \alpha) = \alpha \frac{ke^t -\sin kt-k\cos kt}{1+k^2} + \dots \] Note that the solution contains the term $ke^t$: this term suggests that as $t\to\infty$ the solution $x(t, \alpha)$ grows exponentially, i.e. $\lim_{t\to\infty} x(t,\alpha)=\infty$.

    Is this conclusion valid or not?
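The formula quoted in problem 3 can also be sanity-checked numerically: the coefficient of $\alpha$ is the solution $y(t)$ of the variational equation, so for small $\alpha$ the difference quotient $\bigl(x(t,\alpha)-x(t,0)\bigr)/\alpha$ should be close to $\frac{ke^t-\sin kt-k\cos kt}{1+k^2}$. A sketch with $k=1$, evaluated at $t=1$ (the step count and the size of $\alpha$ are arbitrary choices):

```python
import math

def solve(alpha, k=1.0, t_end=1.0, n=20000):
    # Forward Euler for x' = x(1 - x) + alpha*sin(k t), x(0) = 0.
    h = t_end / n
    t, x = 0.0, 0.0
    for _ in range(n):
        x += h * (x * (1.0 - x) + alpha * math.sin(k * t))
        t += h
    return x

alpha = 1e-4
fd = (solve(alpha) - solve(0.0)) / alpha   # finite-difference value of dx/d(alpha)

k = 1.0                                    # same k as in the integration above
y_exact = (k * math.e - math.sin(k) - k * math.cos(k)) / (1.0 + k * k)
```

The difference quotient `fd` and the predicted value `y_exact` agree to a few decimals, which is what the theorem on parameter dependence promises on a fixed finite time interval. Whether the conclusion about $t\to\infty$ is valid is a separate question.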

The Implicit Function Theorem, Bifurcations

See the summary of the Implicit Function Theorem and its application to bifurcation problems.
  1. Draw the bifurcation diagrams for the differential equation $\dot x = f(a, x)$ for the following right hand sides $f$. In each case find all bifurcation points and determine if they are “standard bifurcations” or not.
    1. $f(a, x) = x^2 - a$
    2. $f(a, x) = x^2 - ax$
    3. $f(a, x) = x^3 - ax$
    4. $f(a, x) = x^4 - ax$
    5. $f(a, x) = x^3-ax+1$
    6. $f(a, x) = \sin(x)-ax$
    7. $f(a, x) = ax-x^3+x^5$ (see also problem 3, page 46/47).
  2. Assume that $(x_0,a_0)$ is a fold point in the bifurcation diagram for the differential equation $\dot x = f(x,a)$. Thus $f=f_x=0$ and $f_a\ne 0$, $f_{xx}\ne 0$ at $(x_0,a_0)$. Let the bifurcation set near $(x_0, a_0)$ be given by $a=a(x)$. Find an expression for $a'''(x_0)$ in terms of $f$ and its derivatives at $(x_0,a_0)$.
  3. Draw the bifurcation diagram for $\dot x = f(a, b, x)$, where $f(a, b, x) = x^2-ax-b$ (the bifurcation diagram in this problem is three dimensional).
  4. The same question for $f(a, b, x) = x^3-ax-b$.
  5. The same question for $f(a, b, x) = x^4-ax^2-b$.
  6. The Implicit Function Theorem has the following sibling:
    Inverse Function Theorem. Let $y=f(x)$ be a differentiable function that is defined on some interval $a\lt x\lt b$. Suppose that for some $x_0\in(a,b)$ one has $f'(x_0)\neq0$. Then there is a small $\epsilon\gt0$ such that $f$ is one-to-one on the interval $(x_0-\epsilon, x_0+\epsilon)$. The inverse $x=g(y)$ of $f$ is a differentiable function.
    Prove this by applying the Implicit Function Theorem to the function $F(x,y) = f(x)-y$. You get the inverse $x=g(y)$ by solving the equation $F(x, y)=0$ for $x$.
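As a cross-check on the hand-drawn diagrams, the equilibria and their stability can be listed in a few lines. The sketch below treats part 3 of problem 1, $f(a,x)=x^3-ax$; note that the linear stability test $f_x\lt0$ is inconclusive when $f_x=0$, e.g. at the bifurcation point $a=0$:

```python
import math

def equilibria(a):
    # Equilibria of x' = x^3 - a*x, i.e. real solutions of x(x^2 - a) = 0.
    # For a > 0 these are 0 and +-sqrt(a); for a <= 0 only x = 0 is real.
    if a > 0:
        r = math.sqrt(a)
        return [-r, 0.0, r]
    return [0.0]

def is_stable(a, x):
    # For x' = f(a, x) an equilibrium is asymptotically stable if f_x(a, x) < 0.
    # Here f_x(a, x) = 3x^2 - a; the test says nothing when f_x = 0.
    return 3.0 * x * x - a < 0
```

For example, at $a=1$ this reports $x=0$ as stable and $x=\pm1$ as unstable, matching the sign of $f=x^3-x$ on the phase line.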

Poincaré maps

  1. Compute the Poincaré map $\phi(a)$ for the following equations. Find the fixed point(s), and decide if they are asymptotically stable. (note that all the following equations are linear equations of the form $\dot x = P(t)x+Q(t)$ —see the review of linear diffeqs.)
    1. $\dot x = kx$, with period $T=1$.
    2. $\dot x = kx$, with arbitrary period $T\gt0$.
    3. $\dot x = \sin(t)x$, with period $T=2\pi$.
    4. $\dot x = 1 -\sin^2(t)x$, with period $T=2\pi$.
  2. The (separable) differential equation $\dot x = k\sin(t) x^2$ is periodic in time with period $T=2\pi$. For which $a\in\R$ is the Poincaré–map $\phi(a)$ defined? Compute $\phi(a)$ when it is defined.
  3. Consider a differential equation $\dot x = f(t,x)$ where $f(t+T, x) = f(t,x)$ for all $(t,x)$, and assume the existence and uniqueness theorems apply to $f$. Suppose that $x(t)$ is a solution which is periodic with period $2T$, i.e. $x(t+2T) = x(t)$. Use the properties of the Poincaré–map to show that $x(t)$ also is $T$ periodic.
  4. Time dependent linear equation.  Consider the system \[ \dot x = A + \cos(\omega t) - x, \] where $\omega \gt 0$ and $A \gt 1$ are constants.
    1. Show that the equation is time periodic, and compute its period $T$.
    2. How does $x$ change after time $T$? (i.e. compute the Poincaré map).
    3. Show that there is a solution $x(t)$ such that $x(t+T) = x(t)$.
  5. Time dependent logistic equation.  Consider the system
    \begin{equation} \dot x = x (A + \cos(\omega t) - x), \tag{Lg} \end{equation}
    where $\omega,A \gt 0$ are constants.
    1. If $\phi$ is the Poincaré map, then show that for $a \gt A+1$ one has $\phi(a) \lt a$.
    2. Show that $\phi(0)=0$, and compute $\phi'(0)$. Is $x=0$ a stable fixed point of the Poincaré–map?
    3. Show that our system (Lg) has at least one periodic solution other than $x(t)=0$.
  6. Read the explosive growth example.
    1. Draw the graph of the Poincaré map $\phi$.
    2. There is a time periodic solution, namely, $x(t)=0$. Assume (as in the example) that the period is $T=1$. Does the stability test apply to this fixed point? Is it asymptotically stable?
  7. Let $x(t)$ be the solution of \[ \dot x = a(t)x+b(t), \] on some time interval $0\le t\le T$, where $a(t)$ and $b(t)$ are continuous functions, with $b(t)\ge 0$ for all $t\in[0, T]$. Assuming that $x(0)\ge 0$, show that $x(t)\ge 0$ for all $t\in[0,T]$. (Hint: carefully rework the solution of linear equations so as to turn all integrals into definite integrals. The problem with indefinite integrals is that you can’t tell whether they are positive, because they contain an indefinite constant: for example, $\int x\,dx = \frac12x^2+C$, so whether $\int x\,dx \gt0$ depends on $C$ and $x$; on the other hand $\int_0^x t\,dt = \frac12x^2$, which is always nonnegative.)
  8. Let $\phi$ be the Poincaré map for the differential equation \[ \dot x = x(1-x) - h(t), \] where the “harvesting rate” $h$ is a continuous and $T$–periodic function of time.
    1. Let $x(t, a)$ be the solution with initial value $x(0, a) = a$. Find the variational equation for $y = \frac{\pd x}{\pd a}$, and then find the second variational equation for $z = \frac{\pd^2 x}{\pd a^2}$ by differentiating the first variational equation with respect to $a$ again.
    2. Express $\phi'(a)$ and $\phi''(a)$ in terms of $y$ and $z$.
    3. Show that $\phi''(a)\lt0$ for all $a$.
    4. Show that the differential equation cannot have more than two periodic solutions.
  9. Consider the differential equation \[ \dot x = x(1-x) (x-c(t)), \] where $c(t)$ is a continuous $T$ periodic function of time, and where $0\lt c(t)\lt 1$ for all $t$.
    1. First consider the case where $c(t)=c$ does not depend on time, so the equation is autonomous. Draw the vector field, its fixed points, and determine their stability, as in the growth problem with the cubic right hand side.
    2. Next consider the case in which $c(t)$ is a continuous function of which we only know that it is $T$–periodic, and that it satisfies $0\lt c(t)\lt 1$ for all $t$. Note that both $x_0(t) = 0$ and $x_1(t) = 1$ are solutions of the diffeq. Compute $\phi'(0)$ and $\phi'(1)$, and determine the stability or instability of these solutions.
    3. Use the information you just found about $\phi$ at $a=0$ and $a=1$ to show that there should be at least one more periodic solution, other than $x_0(t)=0$ and $x_1(t)=1$.
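For problem 1.3 the map can be checked by hand: the solution of $\dot x=\sin(t)x$ with $x(0)=a$ is $x(t)=a\,e^{1-\cos t}$, and since $\int_0^{2\pi}\sin t\,dt=0$ the Poincaré map is the identity, $\phi(a)=a$. A quick numerical sanity check of this, as a sketch (the step count is an arbitrary choice):

```python
import math

def poincare(a, n=200000):
    # Poincare map for x' = sin(t) x with period T = 2*pi:
    # integrate over one period with forward Euler, starting from x(0) = a.
    T = 2.0 * math.pi
    h = T / n
    t, x = 0.0, a
    for _ in range(n):
        x += h * math.sin(t) * x
        t += h
    return x
```

`poincare(2.0)` comes back very close to 2, consistent with $\phi$ being the identity; every initial value is then a fixed point, and none of them is asymptotically stable.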

Linear Systems

Matrix algebra review; Eigenvalues and eigenvectors

  1. Let $A, B$ be $n\times n$ matrices, and let $I$ be the $n\times n$ identity matrix. Are the following True or False? Remember: true means “always true,” i.e. for any choice of matrices $A,B$. For those statements that are false provide an example showing why they are not always true; for the true statements provide a reason.
    1. $A+B=B+A$ ?
    2. $AB = BA$ ?
    3. $(A+B)^2 = A^2 + 2AB + B^2$ ?
    4. $(I+A)^2 = I + 2A + A^2$ ?
    5. If $A\vv=B\vv$ for some vector $\vv\in\R^n$ then $A=B$ ?
    6. If $A\vv=B\vv$ for all vectors $\vv\in\R^n$ then $A=B$ ?
  2. Suppose that $A$ and $B$ are $n\times n$ matrices that commute, i.e. for which $AB=BA$.
    1. Suppose that $A$ is invertible. Show that $A^{-1}B=BA^{-1}$.
    2. If $\vv$ is an eigenvector with eigenvalue $\lambda$ for the matrix $A$, and if $B\vv\neq0$, then show that $B\vv$ also is an eigenvector, also with eigenvalue $\lambda$ for $A$.
  3. Let $A$ be a $5\times 5$ matrix, and let $\vv$, $\vw$ be eigenvectors of $A$ with eigenvalues $\lambda=2$, and $\mu=-3.$ Let $I$ be the $5\times 5$ identity matrix.
    1. Can $\vv=0$? What about $\vw$? Explain.
    2. Compute $A(2\vv-2\vw)$, $A^2(2\vv-2\vw)$, and $A^3(2\vv-2\vw)$
    3. Compute $(A+I)(2\vv-2\vw)$.
    4. Compute $(A+I)^2(2\vv-2\vw)$.
    5. Compute $(A-2I)(A+3I)(2\vv-2\vw)$.
    6. Suppose $s\vv+t\vw=0$ for certain numbers $s, t\in\R$. Show directly that $s=t=0$ by expanding $A(s\vv+t\vw)$.
    7. Are $\vv, \vw $ linearly independent? Explain.
    8. Is $\{\vv, \vw\}$ a basis for $\R^5$? Explain.
  4. Let $A$ be a $3\times3$ matrix with eigenvalues $\lambda=1$, $\mu=-1$, $\nu=2$, and corresponding eigenvectors $\vu, \vv, \vw$.
    1. Are $\vu, \vv, \vw$ linearly independent? Explain.
    2. Is $\{\vu, \vv, \vw\}$ a basis for $\R^3$? Explain.
    3. Explain why any vector $\vx$ can be written as a linear combination of $\vu, \vv, \vw$.
    4. Solve the equation $A\vx = 2\vu+3\vv-\vw$ for $\vx$ (suggestion: look for a solution $\vx$ that is a linear combination of $\vu$, $\vv$, and $\vw$.)
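The computations in problem 3 can be mirrored on a concrete example. The diagonal matrix below is a hypothetical stand-in chosen for illustration (the problem itself is abstract; any $5\times5$ matrix with eigenvalues $2$ and $-3$ would do):

```python
# Illustration of problem 3: a diagonal 5x5 matrix whose first two diagonal
# entries are the eigenvalues lambda = 2 and mu = -3 (an assumed example).
diag_A = [2.0, -3.0, 1.0, 1.0, 1.0]
v = [1.0, 0.0, 0.0, 0.0, 0.0]   # eigenvector with eigenvalue 2
w = [0.0, 1.0, 0.0, 0.0, 0.0]   # eigenvector with eigenvalue -3

def apply_diag(d, x):
    # Multiply the diagonal matrix diag(d) by the vector x.
    return [di * xi for di, xi in zip(d, x)]

u = [2.0 * vi - 2.0 * wi for vi, wi in zip(v, w)]   # the vector 2v - 2w
Au = apply_diag(diag_A, u)                          # A(2v - 2w) = 4v + 6w
# (A - 2I)(A + 3I) annihilates both eigendirections, hence sends u to 0:
r = apply_diag([d - 2.0 for d in diag_A],
               apply_diag([d + 3.0 for d in diag_A], u))
```

Here $A(2\vv-2\vw)=2\lambda\vv-2\mu\vw=4\vv+6\vw$, matching part 2, and the product $(A-2I)(A+3I)$ kills $2\vv-2\vw$, matching part 5.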

The exponential of a matrix

  1. Compute $e^{tA}$ for the following matrices. In each case show which system of linear homogeneous differential equations is solved this way.
    1. $A = \begin{pmatrix} 0 & 2 \\ \tfrac12 & 0 \end{pmatrix} $
    2. $A = \begin{pmatrix} 0 & -b \\ a & 0 \end{pmatrix} $ where $a$ and $b$ are positive constants.
    3. $A = \begin{pmatrix} c& 0 & 0 \\0& 0 & -b \\ 0& a & 0 \end{pmatrix} $ where $a,b$, and $c$ are constants with $a, b\gt0$.
    4. $A = \begin{pmatrix} 0 & -b &0&0\\ a & 0 &0&0 \\ 0&0&0&-c \\ 0&0&d&0 \end{pmatrix} $ where $a,b,c$, and $d$ are positive constants.
    5. $A = \begin{pmatrix} 0 & 1 \\ 0& 0 \end{pmatrix} $
    6. $A = \begin{pmatrix} 0 & a & c \\ 0& 0 & b \\ 0&0&0 \end{pmatrix} $ where $a,b,c$ are constants.
    7. $A = \begin{pmatrix} 0 &0 & -a & 0 \\ 0& 0 & 0&-a \\ b&0&0&0 \\ 0&b&0&0 \end{pmatrix} $ where $a$ and $b$ are constants.
    8. $A = \begin{pmatrix} \alpha & 2 \\ \tfrac12 & \alpha \end{pmatrix} $ where $\alpha\in\R$ is a constant.
    9. $A = \begin{pmatrix} 1 & 0 & 0 \\ 0&-1&0 \\ 0&0&-2 \end{pmatrix} $.
    10. $A = \begin{pmatrix} 0 & 1 & 0 \\ 0&0&1 \\ 1&0&0 \end{pmatrix} $.
  2. Fixed vectors of the matrix exponential.
    1. Suppose that for some $n\times n$ matrix $A$ and some vector $\vv\in\R^n$ we know that $e^{tA}\vv = \vv$ for all $t\in\R$. Show that $A\vv=0$. (Hint: differentiate the given identity $e^{tA}\vv=\vv$ with respect to time.)
    2. Find a $2\times 2$ matrix $A\neq 0$ such that $e^{2\pi A}\vv = \vv$ for all $\vv\in\R^2$.
  3. Let $A$ be an $n\times n$ matrix, and let $\vv\in\R^n$ be an eigenvector of $A$, i.e. $A\vv=\lambda \vv$. Use the definition of the matrix exponential in terms of a series to compute $e^{tA}\vv$. (Hint: how much is $A^2\vv$? $A^n \vv$?)
  4. Suppose that for some $n\times n$ matrix $A$ and some vector $\vv\in\R^n$ we know that $e^{tA}\vv = e^{-t}\vv$ for all $t\in\R$. Show that $A\vv=-\vv$.
  5. Prove the first two properties of the matrix norm. (The third one was done in lecture.) I.e. show that for any pair of matrices $A, B$ and for any real number $t$ one has \[ \|A+B\|\le \|A\|+\|B\|, \qquad \|tA\| = |t|\; \|A\|, \] where the norm $\|A\|$ is defined to be the maximum row sum of the matrix $A$.
  6. In this problem $\|A\|$ is the maximal row sum of the matrix $A$, as defined in the previous problem.
    1. Prove that for any vector $\vx\in\R^n$ and any matrix $A$ one has \[ \|A\vx\|_\max \leq \|A\|\; \|\vx\|_\max. \]
    2. Find a $2\times2$ matrix $A$ and a vector $\vx$ such that \[ \|A\vx\|_\max \lt \|A\|\; \|\vx\|_\max. \]
    3. Let $A$ be any $n\times n$ matrix. Show that there is a vector $\vx$ such that \[ \|A\vx\|_\max = \|A\|\,\|\vx\|_\max. \] Hint: if all the entries of $A$ are positive, then consider $\vx=(1, 1, \dots, 1)$.
  7. To prove existence of a solution to \[ \frac{d\vx}{dt} = A\vx, \qquad \vx(0)= C \] via Picard iteration one defines a sequence of functions $\vx_k(t)$ ($k=1,2,\dots$) by \begin{align*} \vx_0(t) &= C, \\ \frac{d\vx_k}{dt} &= A\vx_{k-1}(t), \quad \vx_k(0) = C. \end{align*} Compute $\vx_k(t)$ for all $k$ and $t$.
  8. Remember that $\mathrm{Tr}(A)$ is the trace of the matrix $A$.
    1. Prove $\displaystyle\det e^{A} = e^{\mathrm{Tr}(A)}$ (you may assume that $A$ has a basis of eigenvectors $V_1$, …, $V_n$).
    2. Does there exist a (real) matrix $A$ for which $\displaystyle e^A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} $?
  9. Let $\vv\in\R^3$ be a vector of (Euclidean) length $\omega$. In this problem you will solve the system of differential equations $\dot\vx = \vv\times \vx$, where “$\times$” is the usual cross product in $\R^3$.
    Let $\vw_1$ and $\vw_2$ be a pair of unit vectors that are perpendicular to $\vv$, and to each other. Choose $\vw_1, \vw_2$, so that $\{\vw_1, \vw_2, \vv\}$ satisfies the right hand rule ($\vw_1\times\vw_2=\vv/\|\vv\|$).
    1. Let $\vx= x_1\vw_1+x_2\vw_2+x_3\vv$ be any vector and write $\vv\times \vx$ in terms of $\{\vw_1, \vw_2, \vv\}$, i.e. find $y_1,y_2,y_3$ such that \[ \vv\times\vx = y_1\vw_1+y_2\vw_2+y_3\vv. \] (Suggestion: make a drawing and first compute $\vv\times \vw_1$, $\vv\times \vw_2$, $\vv\times \vv$; then compute $\vv\times \vx$.)
    2. Expand the vector $\vx$ in terms of $\{\vw_1, \vw_2, \vv\}$, i.e. set $\vx(t) = x_1(t)\vw_1+x_2(t)\vw_2+x_3(t)\vv$, and write the system of differential equations that the coefficients $x_1(t)$, $x_2(t)$, and $x_3(t)$ satisfy if $\dot\vx = \vv\times\vx$.
    3. Solve the resulting system of differential equations by writing it in the form $\dot\vx = A\vx$, and by computing $e^{tA}$.
    4. Give a geometric interpretation of $e^{tA}\vx_0$.
  10. True or false? Which (if any) of the following statements are true for any pair of $n\times n$ matrices $A$ and $B$:
    1. $e^{A}e^{B} = e^{B}e^{A}$;
    2. $e^{A+B} = e^{A} e^{B}$;
    3. $e^{-A} = \bigl(e^{A}\bigr)^{-1}$;
    4. $B^{-1}e^{A}B = e^{B^{-1}AB}$.
  11. Given two matrices $A$ and $B$, find a matrix $C$ such that $e^{-B} e^A e^B = e^C$. Hint: note that $e^{-B} = \bigl(e^B\bigr)^{-1}$ and use the last question above.
  12. Consider the matrix \[ A = \begin{pmatrix} -1&1&0 \\ 0&-1&1 \\ 0&0&-1 \end{pmatrix} \]
    1. Find the eigenvalues of $A$.
    2. Compute $e^{tA}$.
    3. Compute the maximum row sum of $e^{tA}$, i.e. $\|e^{tA}\|$.
    4. For which $\delta\in\R$ is it true that there is a $C\in(0, \infty)$ such that $\|e^{tA}\|\leq Ce^{\delta t}$ for all $t\geq 0$?
  13. Let $V$ be any invertible matrix. Which of the following are always true:
    1. $\|V^{-1}\| \, \|V\| = 1$
    2. $\|V^{-1}\| \, \|V\| \le 1$
    3. $\|V^{-1}\| \, \|V\| \ge 1$
    4. None of the above.
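Several of the identities appearing above ($\det e^{A}=e^{\mathrm{Tr}(A)}$, $e^{-A}=\bigl(e^{A}\bigr)^{-1}$, and the triangular structure of $e^{tA}$ for a triangular $A$) can be checked numerically. A minimal sketch using the defining power series for the triangular matrix with $-1$'s on the diagonal considered above (the truncation at 30 terms is an arbitrary choice, adequate for small matrices):

```python
import math

def matmul(X, Y):
    # Product of two square matrices given as lists of rows.
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=30):
    # Matrix exponential via its defining power series sum_k A^k / k!.
    n = len(A)
    E = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in E]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in matmul(term, A)]
        E = [[E[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return E

A = [[-1.0, 1.0, 0.0],
     [0.0, -1.0, 1.0],
     [0.0, 0.0, -1.0]]
E = expm(A)

# e^A stays upper triangular, so det e^A is the product of its diagonal.
detE = E[0][0] * E[1][1] * E[2][2]   # should equal e^{Tr A} = e^{-3}
P = matmul(expm([[-x for x in row] for row in A]), E)   # e^{-A} e^{A}, should be I
```

The computed determinant matches $e^{-3}$ to machine precision, and $e^{-A}e^{A}$ comes out as the identity, consistent with the true/false items above (note that $A$ and $-A$ commute, which is why that product collapses).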