
🚀 Eigenvalues & Eigenvectors: Finding the Invariants

Eigenvalues and eigenvectors are properties of square matrices that reveal the “skeleton” of a linear transformation. They identify the special directions along which the transformation acts as a pure scaling: vectors in those directions are stretched or shrunk, but not rotated.


🟢 Level 1: Core Definitions

1. The Eigenvalue Equation

For a square matrix $\mathbf{A}$, a non-zero vector $\mathbf{v}$ is an eigenvector if transforming $\mathbf{v}$ by $\mathbf{A}$ yields a scalar multiple of $\mathbf{v}$:

$$\mathbf{A}\mathbf{v} = \lambda\mathbf{v}$$

Where:

  • $\mathbf{v}$ is the eigenvector.
  • $\lambda$ (lambda) is the eigenvalue.

2. Characteristic Equation

To find the eigenvalues of $\mathbf{A}$, we solve the characteristic equation:

$$\det(\mathbf{A} - \lambda\mathbf{I}) = 0$$

This is a polynomial in $\lambda$ of degree $n$ (where $n$ is the dimension of $\mathbf{A}$).

import numpy as np

# Define a 2x2 matrix
A = np.array([[4, -2], [1, 1]])

# Calculate eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)

print(f"Eigenvalues: {eigenvalues}")
print(f"Eigenvectors (columns):\n{eigenvectors}")
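To connect the two definitions, we can build the characteristic polynomial explicitly, check that its roots match what `np.linalg.eig` returns, and verify that each eigenpair satisfies $\mathbf{A}\mathbf{v} = \lambda\mathbf{v}$. This is a sanity-check sketch using the same matrix as above:

```python
import numpy as np

A = np.array([[4, -2], [1, 1]])

# Coefficients of det(A - lambda*I); for this matrix: lambda^2 - 5*lambda + 6
coeffs = np.poly(A)
roots = np.roots(coeffs)  # roots of the characteristic polynomial

eigenvalues, eigenvectors = np.linalg.eig(A)

# The roots of the characteristic polynomial are exactly the eigenvalues
assert np.allclose(sorted(roots), sorted(eigenvalues))

# Each eigenpair satisfies A v = lambda * v
for i, lam in enumerate(eigenvalues):
    v = eigenvectors[:, i]  # i-th eigenvector is the i-th column
    assert np.allclose(A @ v, lam * v)

print(f"Characteristic polynomial coefficients: {coeffs}")
print(f"Roots (= eigenvalues): {sorted(roots)}")
```

Note that `np.linalg.eig` returns eigenvectors as *columns* of the second output, which is why the loop indexes `eigenvectors[:, i]`.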

🟡 Level 2: Diagonalization and Eigendecomposition

3. Eigendecomposition

If a square matrix $\mathbf{A}$ has $n$ linearly independent eigenvectors, it can be factored as:

$$\mathbf{A} = \mathbf{V} \mathbf{\Lambda} \mathbf{V}^{-1}$$

Where:

  • $\mathbf{V}$ is a matrix whose columns are the eigenvectors of $\mathbf{A}$.
  • $\mathbf{\Lambda}$ (Lambda) is a diagonal matrix containing the corresponding eigenvalues.
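The factorization can be verified numerically. A minimal sketch, reusing the matrix from the Level 1 example:

```python
import numpy as np

A = np.array([[4, -2], [1, 1]])

# V holds the eigenvectors as columns; Lambda is diagonal with eigenvalues
eigenvalues, V = np.linalg.eig(A)
Lambda = np.diag(eigenvalues)

# Reassemble A from its eigendecomposition: A = V @ Lambda @ V^-1
A_reconstructed = V @ Lambda @ np.linalg.inv(V)

assert np.allclose(A, A_reconstructed)
print("A equals V @ Lambda @ V^-1 (up to floating-point error)")
```

This only works because the 2x2 matrix here has 2 linearly independent eigenvectors; for a defective matrix, `V` would not be invertible.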

4. Geometric Interpretation

Eigenvectors represent the “principal axes” of a transformation. In a transformation that stretches an image, the eigenvectors point in the directions of the stretch, and the eigenvalues give its magnitude: an eigenvalue greater than 1 stretches, one between 0 and 1 compresses, and a negative eigenvalue flips the direction.


🔴 Level 3: Principal Component Analysis (PCA)

5. Variance and Eigenvectors

PCA is a technique for dimensionality reduction that identifies the directions of maximum variance in a dataset. These directions are the eigenvectors of the covariance matrix of the data.

  1. Center the data: Subtract the mean from each feature.
  2. Compute covariance: $\mathbf{C} = \frac{1}{n-1} \mathbf{X}^T \mathbf{X}$, where $\mathbf{X}$ is the centered data matrix.
  3. Eigendecomposition: Find eigenvalues and eigenvectors of C\mathbf{C}.
  4. Project: Choose the eigenvectors with the largest eigenvalues to represent the data in a lower-dimensional space.
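The four steps above can be sketched directly with NumPy (variable names are illustrative, and the toy data matches the scikit-learn example below):

```python
import numpy as np

X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]])

# 1. Center the data: subtract the mean of each feature (column)
X_centered = X - X.mean(axis=0)

# 2. Compute the sample covariance matrix (n - 1 in the denominator)
n = X.shape[0]
C = (X_centered.T @ X_centered) / (n - 1)

# 3. Eigendecomposition; eigh suits the symmetric C and returns
#    eigenvalues in ascending order
eigenvalues, eigenvectors = np.linalg.eigh(C)

# 4. Project onto the eigenvector with the largest eigenvalue (last column)
top_component = eigenvectors[:, -1]
X_reduced = X_centered @ top_component

print(f"Variance along the top component: {eigenvalues[-1]:.3f}")
print(f"Projected data: {X_reduced}")
```

Because the toy data lies exactly on a line, all of the variance falls along the top eigenvector and the other eigenvalue is 0.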

# Simple PCA workflow concept
from sklearn.decomposition import PCA
import numpy as np

X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])

# Reduce to 1 dimension
pca = PCA(n_components=1)
X_reduced = pca.fit_transform(X)

print(f"Original shape: {X.shape}")
print(f"Reduced shape: {X_reduced.shape}")
print(f"Explained variance ratio: {pca.explained_variance_ratio_}")