Linear Algebra & Applications

2201NSC

Week 9 - Factorisations

Problem 1

  1. Prove that the inverse of a diagonal matrix must be a diagonal matrix.
  2. Prove that the inverse of a lower-triangular matrix must be lower-triangular.
  3. Prove that the inverse of an upper-triangular matrix must be upper-triangular.





Problem 1 (cont.)

  1. Prove that the inverse of a diagonal matrix must be a diagonal matrix. (One possible argument is sketched below.)
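A sketch of one possible argument (among several valid approaches): exhibit a diagonal candidate for the inverse and verify it directly.

Let $D=\operatorname{diag}(d_1,\dots,d_n)$ be invertible. Each $d_i\neq 0$, since $d_i=0$ would give $D$ a zero row, making $D$ singular. Set $E=\operatorname{diag}(d_1^{-1},\dots,d_n^{-1})$. Then

$$(DE)_{ij}=\sum_k d_i\,\delta_{ik}\,d_k^{-1}\,\delta_{kj}=\delta_{ij},$$

so $DE=I$ and, by the same computation, $ED=I$. By uniqueness of the inverse, $D^{-1}=E$, which is diagonal.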










Problem 2

Which of the factorisations we have considered (PLU, QR, Cholesky, SVD) has the following properties or requirements? (A quick numerical check is sketched after the list.)

  1. Produces an orthogonal matrix?
  2. Produces an upper or lower triangular matrix?
  3. Can be applied to non-square matrices?
  4. Must be applied to real symmetric matrices?
  5. Provides an orthogonal basis for the column space?
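A quick numerical check of some of these properties, sketched with SciPy on a small example matrix chosen here for illustration:

import numpy as np
from scipy.linalg import lu, qr

# Small example matrix (chosen for illustration only)
A = np.array([[2., 1.],
              [1., 3.]])

P, L, U = lu(A)   # PLU: P is a permutation, L lower-, U upper-triangular
Q, R = qr(A)      # QR: Q orthogonal, R upper-triangular

print("Q^T Q = I?", np.allclose(Q.T @ Q, np.eye(2)))
print("L lower-triangular?", np.allclose(L, np.tril(L)))
print("U upper-triangular?", np.allclose(U, np.triu(U)))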






Problem 3

Use software to calculate the PLU, QR, Cholesky and SVD factorisations of the following matrices, where possible. If a factorisation cannot be computed, explain why the method does not apply.

(a) $\,A=\begin{pmatrix} 0 & -1 \\ 1 & -1 \end{pmatrix}$ (b) $\,A=\begin{pmatrix} 8 & 1 & 6 \\ 3 & 5 & 7\\ 4 & 9 & 2 \end{pmatrix}$
(c) $\,A=\begin{pmatrix} 6 & 1 & -4 \\ 1 & 6 & -5 \\ -4 & -5 & 6 \end{pmatrix}$ (d) $\,A=\begin{pmatrix} 4 & 0 & 0 \\ -1 & 4 & 1\\ 0 & 0 & 4 \end{pmatrix}$
(e) $\,A=\begin{pmatrix} 3 & -1 \\ 1 & -1 \\ 2 & 3 \end{pmatrix}$ (f) $\,A=\begin{pmatrix} 3 & -1 & 1 & 2 \\ 1 & -1 & 0 & -2 \\ 2 & 0 & 1 & 4 \end{pmatrix}$


Problem 3: Python code for part (a)

PLU

import numpy as np
from scipy.linalg import lu, qr, svd, cholesky

A = np.array([[0, -1],
              [1, -1]])

print("--- PLU decomposition ---")
P, L, U = lu(A)  # SciPy's lu returns P, L, U with A = P @ L @ U
print("P:\n", P)
print("L:\n", L)
print("U:\n", U)
print("Check P*L*U:\n", P @ L @ U)

👉 Run Python online here: trinket.io/python3

Note: The second line imports specific linear algebra routines from scipy.linalg. More details here: Linear algebra


Problem 3: Python code for part (a)

QR

import numpy as np
from scipy.linalg import lu, qr, svd, cholesky

A = np.array([[0, -1],
              [1,  -1]])

print("\n--- QR decomposition ---")
Q, R = qr(A)

print("Q:\n", Q)
print("R:\n", R)
print("Check Q*R:\n", Q @ R)
  


Problem 3: Python code for part (a)

Cholesky

import numpy as np
from scipy.linalg import lu, qr, svd, cholesky

A = np.array([[0, -1],
              [1,  -1]])

print("\n--- Cholesky decomposition ---")
try:
    Rchol = cholesky(A, lower=False)
    print("Rchol:\n", Rchol)
except Exception as e:
    print("Cholesky failed:", e)
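# Note (added remark): cholesky() requires a symmetric positive-definite
# matrix; matrix (a) is neither symmetric nor positive definite, so the
# except branch runs here.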
  

Note: The try/except statements check if it is possible to compute the Cholesky decomposition. More info here: Handling Exceptions


Problem 3: Python code for part (a)

SVD

import numpy as np
from scipy.linalg import lu, qr, svd, cholesky

A = np.array([[0, -1],
              [1,  -1]])

print("\n--- SVD decomposition ---")
U_svd, S_svd, Vh_svd = svd(A)
print("U:\n", U_svd)
print("S (as diag):\n", np.diag(S_svd))
print("V^T:\n", Vh_svd)
print("Check U*S*V^T:\n", U_svd @ np.diag(S_svd) @ Vh_svd)
  

Remark 1: For part (f) you may need to modify the code: the matrix is not square, so np.diag(S_svd) will not have the right shape for the reconstruction check (build a rectangular $\Sigma$ instead).

Remark 2: In all cases, if you prefer a fixed number of decimals in the output, include this line of code after the SciPy routines are imported:

np.set_printoptions(precision=5, suppress=True)


Problem 4

For any of the SVD factorisations calculated above, compare the SVD factorisation of $A$ with the matrices obtained from diagonalisation of $A^T A$ and $A A^T$. Recall that the singular values of $A$ are the square roots of the eigenvalues of $A^T A$ (equivalently, of $A A^T$), that the columns of $V$ are eigenvectors of $A^T A$, and that the columns of $U$ are eigenvectors of $A A^T$.










Problem 4: Python code

import numpy as np

# Set NumPy printing precision
np.set_printoptions(precision=5, suppress=True)

# Define matrices
A = np.array([[0, -1], 
              [1, -1]])
              
# SVD
U, S, Vt = np.linalg.svd(A)
print("U =\n", U, "\n")
print("S (singular values) =", S, "\n")
print("V^T =\n", Vt, "\n")
    
# Eigenvalues of A A^T
eigvals_AAt, Ueig = np.linalg.eig(A @ A.T)
print("Eigenvalues of A A^T =", eigvals_AAt)
print("Real sqrt of eigenvalues =", np.sqrt(np.real(eigvals_AAt)), "\n")
    
# Eigenvalues of A^T A
eigvals_AtA, Veig = np.linalg.eig(A.T @ A)
print("Eigenvalues of A^T A =", eigvals_AtA)
print("Real sqrt of eigenvalues =", np.sqrt(np.real(eigvals_AtA)))
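# Note (added remark): eig() does not sort its eigenvalues, and eigenvectors
# are only determined up to sign (and up to basis choice for repeated
# eigenvalues), so compare with U and V up to ordering and sign.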
  



Problem 5

Solve the least-squares problem in 5.7.2, for the data provided in that section, using the QR factorisation approach. Fit each of the following models:

  (a) $y=a e^{-x} + b e^{-x/2} + c$
    (see 5.7.2 for the solution using the direct method)
  (b) $y=ax^3 + bx^2 + cx + d$





Problem 5 (cont.)

  (a) $y=a e^{-x} + b e^{-x/2} + c\;\;$ (see 5.7.2 for the direct method; a QR-based sketch follows below)
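A possible QR-based solution for part (a) is sketched below. It reuses the x and y data arrays from the part (b) code that follows (taken there as the Section 5.7.2 data); only the design matrix changes.

import numpy as np
from scipy.linalg import qr

# Set NumPy printing precision
np.set_printoptions(precision=5, suppress=True)

# Data (same arrays as in the part (b) code below)
x = np.array([1,2,3,4,5,6,7,8,9,10], dtype=float)
y = np.array([0.6,2.1837,2.8844,4.3985,4.2369,4.7755,5.5160,5.6721,5.6654,4.7268], dtype=float)

# Design matrix for y = a e^{-x} + b e^{-x/2} + c
X_exp = np.column_stack([np.exp(-x), np.exp(-x/2), np.ones_like(x)])

# Least squares via QR: X = QR, so solve R c = Q^T y
Q, R = qr(X_exp, mode='economic')
coef_exp = np.linalg.solve(R, Q.T @ y)
print("Coefficients (a, b, c)^T =\n", coef_exp)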











Problem 5: Python code for part (b)

import numpy as np
from scipy.linalg import qr
import matplotlib.pyplot as plt

# Set NumPy printing precision
np.set_printoptions(precision=5, suppress=True)

# Data
x = np.array([1,2,3,4,5,6,7,8,9,10], dtype=float)
y = np.array([0.6,2.1837,2.8844,4.3985,4.2369,4.7755,5.5160,5.6721,5.6654,4.7268], dtype=float)

X_cubic = np.column_stack([x**3, x**2, x, np.ones_like(x)]) # Design matrix for y = a x^3 + b x^2 + c x + d

Q, R = qr(X_cubic, mode='economic') # mode='economic' gives the reduced QR: Q is 10x4, R is 4x4
R_inv = np.linalg.inv(R)
coef_cubic = R_inv @ Q.T @ y  # solves R c = Q^T y
print("Coefficients (a3, a2, a1, a0)^T = \n", coef_cubic)

# Predictions
x_fit = np.linspace(1, 10, 200)
y_cubic_fit = coef_cubic[0]*x_fit**3 + coef_cubic[1]*x_fit**2 + coef_cubic[2]*x_fit + coef_cubic[3]

# ---- Plot ----
plt.figure(figsize=(8,6))
plt.scatter(x, y, color="black", label="Data", zorder=5)
plt.plot(x_fit, y_cubic_fit, "b--", label="Cubic fit")
plt.xlabel("x")
plt.ylabel("y")
plt.title("Least-Squares Approximations")
plt.legend()
plt.grid(True)
plt.show()
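As an optional cross-check (not part of the original worksheet), NumPy's built-in least-squares solver should reproduce the same coefficients:

coef_check = np.linalg.lstsq(X_cubic, y, rcond=None)[0]
print("lstsq check:", coef_check)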