Because Algorithms Are Out There To Be Found

February 26th, 2010 by Sergio de Biasi

Only rather recently (in historical terms) have algorithms begun to receive full first-class citizen treatment in science in general and mathematics in particular. The main concern in the pure sciences has traditionally been to collect facts (or assumptions) and then try to come up with theories that somehow summarize or explain them. When such theories are achieved and are proved to be true and complete, the implicit attitude of the academic community has been that its job is fundamentally done and the rest is mere “implementation details” better left to engineers and technologists.

This was in part the result of inhabiting a reality where massive amounts of calculation were very cumbersome or downright impossible to perform. We were thus in the position of the proverbial primitive tribe that only has concepts for one, two, three and “many”. The intrinsic algorithmic complexity of a problem was overshadowed by the fact that doing any calculation at all was already an issue, so for all practical purposes anything but the simplest computations was infeasible. A detailed classification and understanding of the cost of algorithms, and of how that cost relates to the intrinsic complexity of a problem, therefore had to wait until such fine-grained distinctions became more relevant.

This happened in a rather revolutionary and discontinuous way with the advent of fast computers. Almost overnight, the finer distinctions in algorithmic cost turned into a very real barrier between a problem being actually solvable and being intractable. It also became increasingly and painfully clear that having a true and complete description of the theory that governs a system, and throwing at it a huge but still finite amount of brute-force computational resources, is in many if not most cases still not enough to say anything useful about what the system will or even could actually do. This apparent paradox derives from the fact that knowing how to solve a problem in principle is very different from knowing how to solve it efficiently – that is, within realistic limits on time, memory, energy, communication and so on. This had not been as clearly perceived or studied in the past because those limits used to be so strict that almost nothing beyond trivial computation was feasible anyway. But not any longer! We found ourselves in a situation where using efficient algorithms is not just a matter of optimization, or convenience, or cost. Very often it is instead a matter of being able to have answers at all. And so it suddenly became very relevant to try to find better algorithms.
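
To make that distinction concrete, here is a minimal illustrative sketch of my own (not from the original argument) in Python. Both functions compute the same Fibonacci numbers, so either one “solves” the problem in principle; but the first takes exponentially many recursive calls while the second runs in linear time, and for even modest inputs that is exactly the gap between having an answer and not having one.

    def fib_naive(n):
        # Correct "in principle": directly mirrors the mathematical definition,
        # but recomputes the same subproblems exponentially many times.
        if n < 2:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)

    def fib_linear(n):
        # Also correct, but uses the fact that each value depends only on the
        # previous two, so n additions suffice.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    # fib_naive(40) already takes noticeable seconds; fib_linear(40) is instant.
    print(fib_linear(40))  # 102334155
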

Thus, algorithms as a subject of research were ushered onto the front stage by necessity. And when serious attention started being poured into this area, it became apparent that churning out good algorithms was not just a matter of tinkering and production-line mechanics, but rather a topic ripe with need for pure theoretical research. In particular, the fact that a problem has an intrinsic computational complexity even before we write a single line of code was more clearly understood and much more explicitly stated. In this sense, algorithms are not really arbitrary, or just an ad hoc manifestation of the need to compute something. Any correct algorithm is inevitably connected to the structure of a problem, and it must somehow encode and exploit this structure to succeed. Finding optimal algorithms requires a very deep understanding of the nature and structure of a problem, one that often immensely surpasses what is necessary to merely write “some” algorithm that solves it. The more an algorithm approaches being optimal, the closer it comes to revealing the true nature of the problem being solved, and the narrower the room for arbitrary choices becomes.
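
As a small illustration (again my own, offered only as a sketch): both functions below find a value in a sorted list, but only the second one encodes the structural fact that the list is sorted, and that single piece of structure is precisely what buys the drop from linear to logarithmic time.

    def find_linear(sorted_values, target):
        # Correct, but ignores the structure of the input: it would work just
        # as well on an unsorted list, and pays for that generality with a
        # full scan in the worst case.
        for i, v in enumerate(sorted_values):
            if v == target:
                return i
        return -1

    def find_binary(sorted_values, target):
        # Exploits the one structural fact we know (the list is sorted) to
        # discard half of the remaining candidates with each comparison.
        lo, hi = 0, len(sorted_values) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_values[mid] == target:
                return mid
            if sorted_values[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    data = list(range(0, 1_000_000, 2))  # sorted even numbers
    print(find_linear(data, 123456), find_binary(data, 123456))  # 61728 61728
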

So, optimal algorithms are not so much invented as discovered; they are out there to be found.
