Complexity theory

Complexity theory is the study of the efficiency of algorithms with respect to several metrics, usually time and memory usage. Complexity theorists aim to classify different problems into classes of difficulty and study the relations that hold between the classes.

When studying computability, we are concerned with identifying that which is or is not computable in an ideal sense, without worrying about time or memory limitations.

However, in practice we often have to be more pragmatic. A program which takes a googol years to run is not going to see much use. If you need more gigabytes of memory to solve a computational problem than there are atoms in the universe, you may as well declare the problem unsolvable for all practical purposes.

Complexity theory raises the standards of computability when drawing the boundary between that which you can do with a computer and that which you cannot. It concerns the study of the asymptotic behavior of programs when fed inputs of growing size, in terms of the resources they consume. The resources complexity theorists most often work with are the time a program takes to finish and the highest memory usage at any given point of the execution notememory usage is frequently called space complexity.
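As a concrete sketch of this kind of asymptotic analysis, the hypothetical snippet below counts the comparison steps two search procedures perform on inputs of growing size; the step-counting functions are our own illustration, not standard library features.

```python
# Count the comparisons made by linear and binary search on a sorted
# list, to illustrate how time usage grows with the input size n:
# linearly in one case, logarithmically in the other.
def linear_search_steps(items, target):
    steps = 0
    for x in items:
        steps += 1
        if x == target:
            break
    return steps

def binary_search_steps(items, target):
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

for n in (1_000, 1_000_000):
    items = list(range(n))
    print(n, linear_search_steps(items, n - 1), binary_search_steps(items, n - 1))
```

Doubling the input roughly doubles the linear search's step count, while the binary search's count grows by about one step; asymptotic analysis captures exactly this difference while ignoring constant factors.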

Complexity theory allows us to have a deeper understanding of what makes an algorithm efficient, which in turn allows us to develop better and faster algorithms. Surprisingly enough, it turns out that proving that some computational problems are hard to solve has incredibly important practical applications in cryptography.

The abstract framework in which we develop this theory is that of Turing machines and decision problems. In this context, time complexity is associated with the number of steps a TM takes to halt and return an output, while space complexity corresponds to the length of tape we would need for the TM to never fall off when moving left or right.
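These two measures can be made concrete with a toy simulator. The sketch below (our own minimal construction, not a standard definition) runs a Turing machine given as a transition table and reports the steps taken and the tape cells visited:

```python
# Minimal Turing machine simulator tracking two complexity measures:
# time (number of steps until halting) and space (number of distinct
# tape cells the head visits).
def run_tm(transitions, tape, state="start"):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head, steps, visited = 0, 0, {0}
    while state != "halt":
        symbol = tape.get(head, "_")            # "_" is the blank symbol
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1, "S": 0}[move]
        visited.add(head)
        steps += 1
    out = "".join(tape[i] for i in sorted(tape) if tape[i] != "_")
    return out, steps, len(visited)

# Example machine: invert every bit of a binary string, halting at the
# first blank. Its time and space usage are both linear in the input.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "S"),
}
out, steps, cells = run_tm(flip, "1011")  # out == "0100"
```

On an input of length \(n\), this machine takes \(n + 1\) steps and visits \(n + 1\) cells, which is what we mean when we say its time and space complexity are both linear.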

One may worry that these complexity measures are highly dependent on the choice of computational framework. After all, if we allow our TM to skip two spaces per step each time it moves, it will potentially take half the time to compute something. However, the asymptotic behavior of complexity is surprisingly robust, though there are some caveats.

The most interesting characterization of complexity comes in the form of complexity classes, which break down the family of decision problems into sets of problems which can be solved with particular constraints.

Perhaps the most important complexity class is \(P\), the class of decision problems which can be efficiently computed noteFor example, checking whether a graph is connected or not. The second best known class is \(NP\), the class of problems whose solutions can be easily checked noteAn example is factoring a number: it is hard to factor \(221\), but easy to multiply \(13\) and \(17\) and check that \(13 \cdot 17 = 221\). It is an open problem whether those two classes are one and the same; that is, whether every problem whose solutions are easy to check is also easy to solve.
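The asymmetry between checking and solving can be seen directly in the factoring example from the note above. The sketch below contrasts verifying a proposed factorization (a single multiplication) with finding a factor by naive trial division; the function names are our own.

```python
# Verifying a certificate for "n is composite" is cheap: one
# multiplication and two nontriviality checks.
def verify_factors(n, p, q):
    return 1 < p < n and 1 < q < n and p * q == n

# Finding a factor without a certificate is much more work: naive
# trial division tries up to sqrt(n) candidate divisors.
def find_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None  # n is prime

verify_factors(221, 13, 17)   # cheap check of a given solution
find_factor(221)              # slow search for a solution
```

For numbers hundreds of digits long, verification stays trivial while no known algorithm finds the factors efficiently; this gap between checking and solving is exactly what the \(P\) versus \(NP\) question is about.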

There are many more important complexity classes, and it can be daunting to contemplate the sheer variety with which complexity theory deals. Feel free to take a guided tour through the complexity zoo if you want an overview of some of the most relevant ones.

An important concept is that of a reduction: a translation of one problem into another, such that an efficient method for solving the second would yield an efficient method for solving the first. Some complexity classes contain problems to which every other problem in the class can be reduced, so that solving one of them efficiently would let you solve them all efficiently. Those are called the complete problems of a complexity class.
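A classic example of a reduction relates two well-known NP-complete problems: a graph has an independent set of size \(k\) exactly when its complement graph has a clique of size \(k\). The sketch below implements this translation; the brute-force clique solver is for illustration only.

```python
# Reduction from INDEPENDENT SET to CLIQUE: complement the graph, then
# ask the clique question. The translation itself is cheap (polynomial
# time), even though both problems are believed hard to solve.
from itertools import combinations

def complement(vertices, edges):
    all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
    return all_pairs - {frozenset(e) for e in edges}

def has_clique(vertices, edges, k):
    # Brute force for illustration only: check every k-subset.
    edges = {frozenset(e) for e in edges}
    return any(
        all(frozenset(p) in edges for p in combinations(subset, 2))
        for subset in combinations(vertices, k)
    )

def has_independent_set(vertices, edges, k):
    # The reduction: solve the clique problem on the complement graph.
    return has_clique(vertices, complement(vertices, edges), k)
```

Any improvement to the clique solver immediately speeds up independent-set queries through this translation, which is why an efficient algorithm for one complete problem would solve the whole class efficiently.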

This page is meant to be a starting point for learning complexity theory from the ground up. If any concept feels mysterious to you, try exploring the greenlinks in their order of appearance. If you feel like the concepts presented are too basic, try a different lens.
