SF2524 / FSF3580 HT22 Matrix Computations for Large-scale Systems

Welcome


Welcome to this course

In this course we will learn some of the most common numerical techniques and algorithms used to efficiently solve problems expressed using large matrices. We focus on a detailed understanding of the performance of these methods when they are applied to large-scale systems, and study topics such as convergence, accuracy and efficiency.

The programming language for the homework assignments in this course is Julia.

Julia-specific information for this course: Julia instructions

Teachers: Elias Jarlebring and Siobhan Correnty.

 

Why matrix computations?

The most common computational approaches to scientific problems involve, in some way, a subproblem containing a matrix. The efficiency and robustness of the computational method used for this matrix problem is often crucial for the efficiency and robustness of the entire approach. After an accurate discretization, a partial differential equation is approximated by a problem involving a large matrix. Google's page-rank algorithm involves computations with a huge transition matrix representing links between web pages. The large amounts of data in data science and AI often need to be processed with the linear algebra formulations covered in this course: http://www.fast.ai/2017/07/17/num-lin-alg/. In order to achieve good efficiency and robustness for these approaches, you need a detailed understanding of matrix computations.
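As a small illustration of the page-rank idea above, here is a minimal sketch of the power method (one of the Block 1 topics) in Julia, the language used for the homeworks. The 3-page link matrix and the helper name `power_method` are made up for illustration; they are not from the course material.

```julia
# Hypothetical column-stochastic "link matrix" of a made-up 3-page web:
# column j holds the probabilities of following a link from page j.
A = [0.0  0.0  1.0;
     0.5  0.0  0.0;
     0.5  1.0  0.0]

# Sketch of the power method: repeatedly apply A and renormalize.
function power_method(A, x, nsteps)
    for _ = 1:nsteps
        x = A * x          # one power-method step
        x = x / sum(x)     # keep x a probability vector
    end
    return x
end

x = power_method(A, ones(3) / 3, 100)
# x ≈ [0.4, 0.2, 0.4], the dominant eigenvector (the "page-rank" vector)
```

For a stochastic matrix the dominant eigenvalue is 1, and the iteration converges at a rate given by the second-largest eigenvalue in modulus; this is one of the convergence characterizations studied in Block 1.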

 


 

What do I need to do?

The examination of the course consists of correctly completing

  • Homework 1
  • Homework 2
  • Homework 3
  • Asynchronous activities: you must complete everything marked as "asynchronous activities"; see Course plan and Assignments. These are mandatory and must be completed, at the latest, one week after the exam. The deadline specified in Canvas is a recommendation.
  • Exam
  • Only for SF3580: Homework 4
  • Only for SF3580: mandatory active learning workspace (ALW) work

The course has mandatory homeworks, and each homework can give you bonus points for the exam. If you additionally complete the course training tasks, you can earn further bonus points. See the page Homework & bonus points rules for details.

 

How do I learn all this?

The learning activities of the course consist of lectures, asynchronous activities (videos/Canvas quizzes), homeworks and ALW work. The course plan is given on the page Course plan.

 

Prerequisites

  • A basic course in Numerical Methods, for instance SF1544 (or equivalent).
  • Recommended but not strictly necessary: a continuation course in numerical analysis, for instance SF2520 Applied Numerical Methods, which can be taken in parallel.

This course is elective for all master's programmes at KTH. It is conditionally elective in the second year of the Master's programme in Applied and Computational Mathematics, and mandatory for its Computational Mathematics (beräkningsmatematik) track. It is elective for the Master's programme in Computer Simulation for Science and Engineering (COSSE, TDTNM). The course is also suitable for students in the Computer Science and Engineering programme (CDATE) who have taken SF1626 Calculus in Several Variables (flervariabelanalys).

This course is a subset of the PhD-level course SF3580 Numerical Linear Algebra, which is taught jointly every even year (2022, 2024, etc.).

Contents:

The course consists of the following blocks:

  • Block 1: Large sparse eigenvalue problems
    • Topics: Power method, Rayleigh quotient iteration, Arnoldi's method, Repeated Gram-Schmidt, Modified Gram-Schmidt, convergence characterization for Arnoldi's method for eigenvalue problems
    • block1.pdf
  • Block 2: Large sparse linear systems
  • Block 3: Dense eigenvalue algorithms (QR-method)
  • Block 4: Functions of matrices
    • Topics: Taylor definition, Jordan definition, Contour integral definition, Schur-Parlett method, scaling-and-squaring for the matrix exponential, methods for the matrix square root, methods for the sign function
    • block4.pdf
  • Block 5: (only for PhD students taking SF3580) Matrix equations
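To give a flavor of the Block 1 material, the following is a minimal Julia sketch of modified Gram-Schmidt, one of the listed topics. The function name `mgs` and the example matrix are made up for illustration, not taken from the lecture notes.

```julia
using LinearAlgebra  # for dot and norm

# Modified Gram-Schmidt: orthonormalize the columns of A into Q,
# with R upper triangular, so that A = Q*R.
function mgs(A)
    m, n = size(A)
    Q = copy(float.(A))
    R = zeros(n, n)
    for j = 1:n
        for i = 1:j-1
            R[i, j] = dot(Q[:, i], Q[:, j])  # coefficient against q_i
            Q[:, j] -= R[i, j] * Q[:, i]     # subtract immediately (the "modified" part)
        end
        R[j, j] = norm(Q[:, j])
        Q[:, j] /= R[j, j]
    end
    return Q, R
end

A = [1.0 1.0; 1.0 0.0; 0.0 1.0]
Q, R = mgs(A)
# Q' * Q ≈ I and Q * R ≈ A
```

Unlike classical (repeated) Gram-Schmidt, the modified variant subtracts each projection as soon as it is computed, which improves orthogonality in floating-point arithmetic; the difference between the variants is studied in Block 1.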

 

 

ChatGPT may not be used in this course, except to create problems in the ALW (and then its use must be stated). Disciplinary action has been taken in the past.