In Fall 2023 I will teach a course on Parameterized Algorithms (similar to the course taught in Spring 2019). We will (likely) cover chapters 1-7, 13 and 14 of the Parameterized Algorithms book, plus possibly some more recent research papers. There are no formal prerequisites, but the course assumes basic algorithms (CMPSC130B or similar), at least some knowledge of NP-completeness, as well as the ability to read, write and understand mathematical proofs.
The topic of the class can be briefly summarized as follows: So your problem is NP-hard. Now what? One approach to dealing with NP-hard problems is Parameterized Algorithms. Here we aim for algorithms that are potentially super slow on some instances, but fast on the instances that we care about.

Traditional worst-case running time analysis measures the running time in terms of just one parameter n, the size of the input. In parameterized complexity we measure the running time as a function of the size n and (at least) one additional parameter k. The hope is that (a) the instances that we care about solving have a low value of k, and that (b) we are able to design algorithms whose running time is good as long as k is small. For NP-hard problems this translates to algorithms with running times of the form f(k)n^c, where f is an arbitrary function of k and c is a constant independent of k. Such algorithms, and the problems that admit them, are called fixed-parameter tractable (FPT).

In this class we will consider algorithm design techniques for making FPT algorithms, and also for making FPT algorithms that are as fast as possible. Here "as fast as possible" means that f(k) grows as slowly as possible with k, and that c is as low as possible. We will also look into how to prove (under complexity-theoretic assumptions, similar to P not equal to NP) that certain problems do not have FPT algorithms at all.
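To make the f(k)n^c idea concrete, here is a sketch of the classic branching algorithm for Vertex Cover, a standard first example of an FPT algorithm (the function and variable names are my own, chosen for illustration): given a graph and a budget k, any uncovered edge must have one of its two endpoints in the cover, so we branch on the two choices and recurse with budget k-1. The recursion tree has at most 2^k leaves, giving a running time of roughly 2^k times a polynomial in the input size, i.e., f(k) = 2^k and a small constant c.

```python
def vertex_cover(edges, k):
    """Decide whether the graph given by the edge list has a
    vertex cover of size at most k.

    Bounded search tree: each call either stops or branches into
    two subproblems with budget k - 1, so there are at most 2^k
    leaves, and each call does work linear in the number of edges.
    """
    if not edges:
        return True   # every edge is covered
    if k == 0:
        return False  # uncovered edges remain, but the budget is spent
    # Pick any uncovered edge (u, v); every vertex cover must
    # contain u or v, so branch on the two possibilities.
    u, v = edges[0]
    rest_u = [(a, b) for (a, b) in edges if a != u and b != u]
    rest_v = [(a, b) for (a, b) in edges if a != v and b != v]
    return vertex_cover(rest_u, k - 1) or vertex_cover(rest_v, k - 1)
```

For example, the path on vertices 1-2-3-4 has a vertex cover of size 2 (take vertices 2 and 3) but none of size 1, and the algorithm confirms both answers while only ever exploring a search tree of depth at most k.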