Data Science Use Cases for MATH7502



Here are 12 selected use cases celebrated in the course. Material for some of these is based on the course's main references [VMLS], [LALFD], [ILA], and [SWJ]; the exact sections are indicated below. A short illustrative code sketch for each use case follows the list.
  1. Clustering: Here we explore k-means clustering as in [VMLS - Chapter 4].

  2. Convergence proof for the perceptron: Here we prove that the classic perceptron algorithm converges when presented with a linearly separable dataset.

  3. Least squares data fitting: Here we explore how least squares is naturally used for data fitting as in [VMLS - Chapter 13].

  4. Least squares classification: Here we see how least squares can be adapted to create an entry-level classification algorithm as in [VMLS - Chapter 14].

  5. Multi-objective least squares and regularization: Here we see how least squares problems can be modified for multiple objectives and most notably how ridge regression (Tikhonov regularization) is carried out and used. This follows [VMLS - Chapter 15].

  6. Multiple ways of solving least squares: Here we summarize several computational approaches for solving least squares problems. This follows [LALFD - Section II.2].

  7. Linear dynamical systems and systems of differential equations: Here we see how the matrix exponential is used to describe solutions of (continuous-time) linear dynamical systems. This follows [VMLS - Chapter 9] for discrete-time examples and [ILA - Section 6.3] for continuous time.

  8. Covariance matrices and joint probabilities: Here we explore the basic second-order descriptor of multi-dimensional randomness: the covariance matrix. This follows [LALFD - Section V.4].

  9. Multi-variate Gaussian distributions and weighted least squares: Here we consider multi-dimensional normal distributions as in [LALFD - Section V.5].

  10. Cholesky decomposition for multi-variate random variable simulation: Here we see how the Cholesky decomposition of a covariance matrix can help in the Monte Carlo generation of multi-variate random variables, especially multi-dimensional normals.

  11. Analysis of gradient descent and extensions: Here we see how certain toy examples can help analyze gradient descent and its variants. This follows [LALFD - Section VI.4].

  12. Principal component analysis (PCA): Here we explore PCA as obtained via Singular Value Decomposition (SVD) as in [LALFD - Section I.9].
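
Below are short code sketches, one per use case above. These are illustrative sketches only, not the course's own materials: they are written in Python with NumPy (plus SciPy for the matrix exponential), and all datasets, parameters, and helper names in them are made up for the purpose of demonstration.

Use case 1 (k-means clustering): a minimal sketch of the alternating assign/update iteration, run on a made-up two-blob dataset.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: alternate an assignment step and a centroid update step."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # random initial centroids
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its assigned points.
        new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):   # no movement: converged
            break
        centroids = new_centroids
    return labels, centroids

# Made-up data: two well-separated blobs in the plane.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
labels, centroids = kmeans(X, k=2)
print(centroids)   # one centroid near (0, 0), the other near (5, 5)
```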
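Use case 2 (perceptron convergence): the sketch below only demonstrates the behavior that the proof guarantees, on made-up separable data; the mistake bound quoted in the docstring is the standard one established by the convergence argument.

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Classic perceptron with labels +/-1: update w on every mistake.
    On linearly separable data the total number of updates is at most
    (R / gamma)^2, where R = max ||x_i|| and gamma is the separation margin."""
    w = np.zeros(X.shape[1])
    updates = 0
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:    # misclassified (or on the boundary)
                w += yi * xi          # the perceptron update
                updates += 1
                mistakes += 1
        if mistakes == 0:             # a full pass with no mistakes: converged
            return w, updates
    return w, updates

# Made-up separable data; a constant feature plays the role of the bias term.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2, 0.5, (30, 2)), rng.normal(-2, 0.5, (30, 2))])
X = np.hstack([X, np.ones((60, 1))])
y = np.array([1] * 30 + [-1] * 30)
w, updates = perceptron(X, y)
print(w, updates)   # a separating hyperplane, found after finitely many updates
```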
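Use case 3 (least squares data fitting): fitting a quadratic model to made-up noisy data with a single ordinary least squares solve.

```python
import numpy as np

# Fit y ~ c0 + c1 x + c2 x^2 by minimizing ||A c - y||^2.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = 1.0 - 2.0 * x + 3.0 * x**2 + 0.1 * rng.standard_normal(40)  # made-up ground truth

A = np.vander(x, 3, increasing=True)        # design matrix with columns 1, x, x^2
c, *_ = np.linalg.lstsq(A, y, rcond=None)   # the least squares solution
print(c)                                    # close to [1, -2, 3]
```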
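Use case 4 (least squares classification): regressing the labels +/-1 by least squares and classifying with the sign of the fitted value, again on made-up data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.5, 1.0, (100, 2)), rng.normal(-1.5, 1.0, (100, 2))])
y = np.array([1.0] * 100 + [-1.0] * 100)

A = np.hstack([X, np.ones((200, 1))])          # constant feature for the offset
theta, *_ = np.linalg.lstsq(A, y, rcond=None)  # fit the +/-1 labels by least squares
y_hat = np.sign(A @ theta)                     # the classifier: sign of the fitted value
print("training accuracy:", np.mean(y_hat == y))
```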
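Use case 5 (multi-objective least squares and regularization): ridge regression posed as a single stacked least squares problem; the matrix and right-hand side below are made up.

```python
import numpy as np

def ridge(A, b, lam):
    """Tikhonov regularization: minimize ||A x - b||^2 + lam ||x||^2 by
    stacking the two objectives into one ordinary least squares problem."""
    n = A.shape[1]
    A_stacked = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_stacked = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_stacked, b_stacked, rcond=None)
    return x

# Ill-conditioned made-up problem: increasing lam shrinks the solution norm.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10)) @ np.diag(10.0 ** -np.arange(10))
b = rng.standard_normal(30)
for lam in [0.0, 1e-4, 1e-1]:
    print(lam, np.linalg.norm(ridge(A, b, lam)))
```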
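Use case 6 (multiple ways of solving least squares): three standard computational routes to the same least squares solution, compared on a made-up well-conditioned problem.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))    # full column rank with probability one
b = rng.standard_normal(50)

# 1. Normal equations: A^T A x = A^T b (cheap, but squares the condition number).
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# 2. QR factorization: A = QR, then solve the triangular system R x = Q^T b.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# 3. SVD / pseudoinverse: x = A^+ b (the most robust of the three).
x_svd = np.linalg.pinv(A) @ b

print(np.allclose(x_normal, x_qr), np.allclose(x_qr, x_svd))   # True True
```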
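Use case 7 (linear dynamical systems): propagating a continuous-time system x'(t) = A x(t) with the matrix exponential, using x(t) = e^{At} x(0); the 2x2 matrix here is a made-up damped rotation.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.1, -1.0],
              [ 1.0, -0.1]])    # rotates and slowly damps the state
x0 = np.array([1.0, 0.0])

for t in [0.0, 1.0, 2.0, 5.0]:
    print(t, expm(A * t) @ x0)  # the matrix exponential propagates the state
```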
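Use case 8 (covariance matrices): estimating a covariance matrix from samples and checking the estimate against NumPy's built-in; the true covariance is made up.

```python
import numpy as np

rng = np.random.default_rng(0)
true_Sigma = np.array([[2.0, 0.8],
                       [0.8, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], true_Sigma, size=5000)

# Sample covariance: (1/(n-1)) * sum_i (x_i - mean)(x_i - mean)^T.
centered = X - X.mean(axis=0)
Sigma_hat = centered.T @ centered / (len(X) - 1)
print(Sigma_hat)                               # close to true_Sigma
print(np.allclose(Sigma_hat, np.cov(X.T)))     # agrees with np.cov
```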
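Use case 9 (multi-variate Gaussians and weighted least squares): when observation i carries independent Gaussian noise of variance sigma_i^2, maximum likelihood gives the weights w_i = 1/sigma_i^2, and scaling each row by sqrt(w_i) reduces weighted least squares to the ordinary kind; the data here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))
x_true = np.array([1.0, -2.0, 0.5])
sigma = rng.uniform(0.1, 2.0, size=100)            # per-observation noise levels
b = A @ x_true + sigma * rng.standard_normal(100)

s = 1.0 / sigma                                    # sqrt of the weights w = 1/sigma^2
x_wls, *_ = np.linalg.lstsq(A * s[:, None], b * s, rcond=None)
x_ols, *_ = np.linalg.lstsq(A, b, rcond=None)
print("weighted error:  ", np.linalg.norm(x_wls - x_true))
print("unweighted error:", np.linalg.norm(x_ols - x_true))
```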
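Use case 10 (Cholesky decomposition for simulation): if z ~ N(0, I) and Sigma = L L^T is the Cholesky factorization, then mu + L z ~ N(mu, Sigma); the mean and covariance below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

L = np.linalg.cholesky(Sigma)        # lower triangular with Sigma = L @ L.T
Z = rng.standard_normal((10000, 2))  # standard normal draws
X = mu + Z @ L.T                     # Monte Carlo samples from N(mu, Sigma)

print(X.mean(axis=0))                # close to mu
print(np.cov(X.T))                   # close to Sigma
```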
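Use case 11 (analysis of gradient descent and extensions): on the toy quadratic f(x) = (1/2) x^T A x, the per-step error contraction is governed by the eigenvalue spread of A; heavy-ball momentum, one standard extension, converges markedly faster on the same problem. The eigenvalues and step sizes below are made-up textbook choices.

```python
import numpy as np

lmin, lmax = 1.0, 10.0
A = np.diag([lmin, lmax])            # condition number kappa = 10
x0 = np.array([1.0, 1.0])

# Plain gradient descent with the optimal fixed step 2 / (lmin + lmax).
s = 2.0 / (lmin + lmax)
x = x0.copy()
for _ in range(30):
    x = x - s * (A @ x)              # gradient step: grad f(x) = A x
print("gradient descent:", np.linalg.norm(x))

# Heavy-ball momentum with the classical tuning for quadratics.
alpha = 4.0 / (np.sqrt(lmin) + np.sqrt(lmax)) ** 2
beta = ((np.sqrt(lmax / lmin) - 1.0) / (np.sqrt(lmax / lmin) + 1.0)) ** 2
x, x_prev = x0.copy(), x0.copy()
for _ in range(30):
    x, x_prev = x - alpha * (A @ x) + beta * (x - x_prev), x
print("heavy ball:      ", np.linalg.norm(x))   # much smaller after 30 steps
```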
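Use case 12 (PCA via the SVD): center the data and take the SVD of the centered matrix; the right singular vectors are the principal directions, and the squared singular values divided by n - 1 are the variances along them. The 2-d covariance used to generate the data is made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.multivariate_normal([0.0, 0.0], [[3.0, 1.0], [1.0, 1.0]], size=n)

Xc = X - X.mean(axis=0)                          # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
print("principal directions (rows of Vt):\n", Vt)
print("explained variances:", S**2 / (n - 1))    # eigenvalues of the sample covariance

scores = Xc @ Vt.T                               # coordinates in the principal basis
print("first three scores on PC1:", scores[:3, 0])
```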