MAT2253: Applied Linear Algebra
Prerequisite: MAT1214/MAT1213 Calculus I
This comprehensive course in linear algebra provides an in-depth exploration of core concepts and their applications to optimization, data analysis, and neural networks. Students will gain a strong foundation in the fundamental notions of linear systems of equations, vectors, and matrices, as well as advanced topics such as eigenvalues, eigenvectors, and canonical solutions to linear systems of differential equations. The course also covers the critical techniques of calculus on vectors and matrices, optimization, and Taylor series in one and several variables. By the end of the course, students will have a thorough understanding of the mathematical framework underlying principal component analysis, gradient descent, and the implementation of simple neural networks.
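As a concrete preview of the optimization thread (sessions 29 and 33 below), here is a minimal sketch of gradient descent on a least-squares objective, assuming NumPy; the synthetic data, step size, and iteration count are illustrative choices, not part of the course materials.

```python
import numpy as np

# Minimize f(w) = ||X w - y||^2 / (2n) by gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # synthetic design matrix (illustrative)
y = X @ np.array([2.0, -1.0, 0.5])     # targets from a known linear model

w = np.zeros(3)                        # initial guess
lr = 0.1                               # step size (illustrative)
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)  # gradient of the least-squares loss
    w -= lr * grad                     # step along the negative gradient

print(w)  # converges to roughly [2.0, -1.0, 0.5], the generating coefficients
```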
The primary textbook is "Mathematics for Machine Learning" by Deisenroth, Faisal, and Ong (Cambridge University Press, 2020). The book is freely available for personal use at https://mml-book.github.io/book/mml-book.pdf

The secondary textbook is "Pattern Recognition and Machine Learning" by Bishop (Springer, Information Science and Statistics, 2006). The book is freely available for personal use at https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf
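Because the Bishop text anchors the neural-network sessions (36–41 in the schedule), here is a minimal sketch of a one-hidden-layer feed-forward network trained by backpropagation, assuming NumPy; the XOR data, architecture, and learning rate are illustrative assumptions rather than assigned course code.

```python
import numpy as np

# One-hidden-layer network learning XOR by backpropagation.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))           # hidden-layer weights (4 units, illustrative)
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))           # output-layer weights
b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)           # hidden activations
    p = sigmoid(h @ W2 + b2)           # predicted outputs
    # Backward pass: gradients of the squared-error loss via the chain rule
    dp = (p - y) * p * (1 - p)         # delta at the output layer
    dh = (dp @ W2.T) * h * (1 - h)     # delta at the hidden layer
    # Gradient-descent updates (learning rate 0.5, illustrative)
    W2 -= 0.5 * h.T @ dp
    b2 -= 0.5 * dp.sum(axis=0)
    W1 -= 0.5 * X.T @ dh
    b1 -= 0.5 * dh.sum(axis=0)

print(p.round(2))  # typically approaches [[0], [1], [1], [0]]
```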
| Session | Section | Topic | Prerequisites | SLOs |
|---|---|---|---|---|
| 1 | 2.1 | Systems of Linear Equations | | |
| 2 | 2.2 | Matrices | | |
| 3 | 2.3 | Solving Systems of Linear Equations | | |
| 4 | 2.4 | Vector Spaces | | |
| 5 | 2.5 | Linear Independence | | |
| 6 | 2.6 | Basis & Rank | | |
| 7 | | Exam 1 | | |
| 8 | 2.7 | Linear Mappings | | |
| 9 | 2.7 | Linear Mappings (examples) | | |
| 10 | 3.1, 3.2, 3.3 | Norms, Inner Products, Lengths & Distances | | |
| 11 | 3.4 | Angles & Orthogonality | | |
| 12 | 3.5 | Orthonormal Basis | | |
| 13 | 3.7 | Inner Product of Functions | | |
| 14 | | Project 1 | | |
| 15 | 4.1 | Determinant and Trace | | |
| 16 | 4.2 | Eigenvalues & Eigenvectors | | |
| 17 | 4.3, 4.4 | Matrix Factorization | | |
| 18 | 5.1 | Vector Calculus Intro and Taylor Series | | |
| 19 | 5.1, 5.2 | Differentiation Rules Review and Partial Derivatives | | |
| 20 | 5.2 | Gradients: Examples, Visualizations, Computation | | |
| 21 | 5.3 | Gradients of Vector-Valued Functions | | |
| 22 | 5.4, Dhrymes 78 | Gradients of Matrices | | |
| 23 | | Exam 2 | | |
| 24 | 5.5, Dhrymes 78 | Useful Identities for Computing Gradients | | |
| 25 | 5.3 | Gradients of Vector-Valued Functions | | |
| 26 | 5.4, Dhrymes 78 | Gradients of Matrices | | |
| 27 | 5.5, Dhrymes 78 | Useful Identities for Computing Gradients | | |
| 28 | 5.7 | Higher-Order Derivatives | | |
| 29 | Notes | Minimization via Newton's Method & Backpropagation | | |
| 30 | | Project 2 | | |
| 31 | 5.8 | Multivariate Taylor Series | | |
| 32 | Notes | Linear Optimization: Simplex Method | | |
| 33 | 7.1 | Optimization Using Gradient Descent | | |
| 34 | 7.2 and Notes | Constrained Optimization and Lagrange Multipliers: PCA | | |
| 35 | 7.3 | Convex Optimization (time permitting) | | |
| 36 | Bishop, Duda et al. | Feed-Forward Artificial Neural Networks | | |
| 37 | Bishop, Duda et al. | Backpropagation in ANNs | | |
| 38 | Bishop, Duda et al. | Activation Functions: Linear & Nonlinear | | |
| 39 | Bishop, Duda et al. | Step-by-Step Simple ANN | | |
| 40 | Bishop, Duda et al. | Measures of Performance | | |
| 41 | Bishop, Duda et al. | More Complex Architectures of ANNs | | |
| 42 | | Final Project Introduction | | |
| 43 | | Final Project | | |
| 44 | | Review | | |
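Tying the eigenvalue session (16) to the PCA session (34), here is a minimal sketch of principal component analysis via eigendecomposition of the sample covariance matrix, assuming NumPy; the random dataset and the choice of two components are illustrative assumptions.

```python
import numpy as np

# PCA via eigendecomposition of the sample covariance matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 5))          # illustrative dataset: 200 samples, 5 features
A -= A.mean(axis=0)                    # center the data

C = A.T @ A / (len(A) - 1)             # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)   # eigh: for symmetric matrices, ascending order

order = np.argsort(eigvals)[::-1]      # sort components by explained variance
W = eigvecs[:, order[:2]]              # top-2 principal directions
Z = A @ W                              # project data onto the principal subspace

print(Z.shape)                             # (200, 2): reduced representation
print(eigvals[order[:2]] / eigvals.sum())  # fraction of variance explained
```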