Natural Language Processing Across Genres

March 27th, 2011 by Sergio de Biasi

Natural Language Processing Across Genres: From News To Novels

Presented by Kathleen McKeown on 2011.03.25 at the opening event for SALTS.

Tournament Heroes

March 5th, 2011 by Sergio de Biasi

Presented by Maria Chudnovsky on 2011.03.02 at the Rutgers Discrete Math Seminar.

Lower Bounds For The Permanent

January 16th, 2011 by Sergio de Biasi

(Leonid Gurvits’s Breakthrough About) Lower Bounds For The Permanent

Presented by Donald Knuth on 2011.01.13 at the Stanford CS Theory Lunch.

Disjoint Paths In Tournaments

September 28th, 2010 by Sergio de Biasi

Presented by Paul Seymour on 2010.09.21 at the Columbia Discrete Math Seminar.

Maximal Zone Diagrams and Their Computation

June 9th, 2010 by Sergio de Biasi

Presented by Sergio de Biasi on 2010.06.09 at ISVD 2010 in Quebec, Canada.

The paper being presented can be found here:
computer.org/portal/web/csdl/doi/10.1109/ISVD.2010.10

Avoiding Simplicity is Complex

May 25th, 2010 by Sergio de Biasi

This is the title of a recent paper by Prof. Eric Allender at Rutgers University. Who better than the author himself to explain what the paper is all about? The full text of the paper is available here. So without further ado, here is a talk that Prof. Allender gave about this paper on April 15, 2010.

I apologize for the crunchy noises from people eating during the presentation; it is after all a tradition at Rutgers that breakfast is served for people who attend the computer science theory seminar. :-)

Because Algorithms Are Out There To Be Found

February 26th, 2010 by Sergio de Biasi

Only rather recently (in historical terms) have algorithms begun to be treated as first-class citizens in science in general and in mathematics in particular. The main concern in the pure sciences has traditionally been to collect facts (or assumptions) and then try to come up with theories that somehow summarize or explain them. When such theories are achieved and proved to be true and complete, the implicit attitude of the academic community has been that its job is fundamentally done and that the rest is mere “implementation details” better left to engineers and technologists.

This was in part the result of inhabiting a reality where massive amounts of calculation were very cumbersome or downright impossible to perform. We were thus blinded by being in the position of the proverbial primitive tribe that only has concepts for one, two, three and “many”. The intrinsic algorithmic complexity of a problem was overshadowed by the fact that doing any calculation at all was already an issue, so for all practical purposes anything but the simplest computations was infeasible. So a detailed classification and understanding of the cost of algorithms, and of how this relates to the intrinsic complexity of a problem, had to wait until such fine-grained distinctions became more relevant.

This happened in a rather revolutionary and discontinuous way with the advent of fast computers. Almost overnight, the finer distinctions in the cost of algorithms turned into a very real barrier between a problem being actually solvable and being intractable. It also became increasingly and painfully clear that having a true and complete description of the theory that governs a system, and throwing at it a huge but still finite amount of brute-force computational resources, is in many if not most cases still not enough to say anything useful about what the system will or even could actually do. This apparent paradox derives from the fact that knowing how to solve a problem in principle is very different from knowing how to solve it efficiently – that is, within realistic limitations of time, memory, energy, communication and so on. This had not been as clearly perceived or studied in the past because the said limitations used to be very strict. But not any longer! We found ourselves in a situation where using efficient algorithms is not just a matter of optimization, or convenience, or cost. Very often it is instead a matter of being able to get answers at all. And so it suddenly became very relevant to try to find better algorithms.
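
To make the gap concrete, here is a toy example (a minimal Python sketch, using the Fibonacci numbers purely as an illustration): the recurrence F(n) = F(n-1) + F(n-2) is a true and complete description of the Fibonacci numbers, yet evaluating it literally takes exponential time, while a trivial reorganization of the very same knowledge answers in linear time.

def fib_naive(n):
    # Literal translation of the recurrence F(n) = F(n-1) + F(n-2).
    # Correct "in principle", but the number of recursive calls grows
    # exponentially with n, so it is hopeless for even modest inputs.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_fast(n):
    # The same recurrence, reorganized so that each value is computed once:
    # a linear number of additions instead of an exponential number of calls.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_fast(200))    # answers immediately
# fib_naive(200) encodes exactly the same theory, but would not finish
# in any realistic amount of time.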

Thus, algorithms as a subject of research were ushered onto the front stage by necessity. And when serious attention started being poured into this area, it became apparent that churning out good algorithms was not just a matter of tinkering and production-line mechanics, but rather a topic ripe for pure theoretical research. In particular, the fact that there is an intrinsic computational complexity to solving a problem, even before we try to write a single line of code, was more clearly understood and much more explicitly stated. In this sense, algorithms are not really arbitrary or just an ad hoc manifestation of the need to compute something. Any correct algorithm is inevitably connected to the structure of the problem it solves, and it must somehow encode and exploit this structure to succeed. Finding optimal algorithms requires a very deep understanding of the nature and structure of a problem, one that often immensely surpasses what is necessary to merely write “some” algorithm that solves it. The more an algorithm approaches being optimal, the closer it comes to revealing the true nature of the problem being solved, and the more severely the room for arbitrary choices is restricted.

So, optimal algorithms are not really invented as much as discovered; they are out there to be found.

Algorithms Illustrated

February 26th, 2010 by Sergio de Biasi

Mathematicians are (in)famous for the habit of describing the most complicated theoretical constructs without giving a single example (or giving only a few obscure ones). Computer scientists are a bit less inclined to this sin, but still commit it, especially as we venture deeper into the realm of theory. So it often happens, as I'm reading a paper on a new data structure, or algorithm, or idea, that I'm deciphering paragraph after paragraph of seemingly arbitrary definitions and, although I follow the algebra and the logic, I am still wondering what is actually going on. Then, after sometimes almost going insane trying to come up with a concrete, meaningful example of an actual object with the desired properties, I think: “Oh, so THAT is what it was all about.”

To make this more concrete, consider the following text: “Consider a function on the natural numbers that we will call S(n), which for all elements in its domain is defined as the smallest element in the domain which is still greater than the argument. Now, consider the function SS(m,n) defined recursively as SS(m,0)=m and SS(m,n)=SS(S(m),SI(n)), where SI(n) is the inverse of S. Finally, consider the function SSS(m,n), defined recursively as SSS(m,0)=0, SSS(0,n)=0, and SSS(m,n)=SS(SSS(m,SI(n)),m).”

Now, if you are a mathematician or a computer scientist, maybe you were able to figure out right away what I'm talking about. That is, however, almost certainly because you already know what I'm talking about: you are familiar with these ideas and this structure. SSS is simply multiplication. Now, of course it is often necessary and even desirable to express certain ideas within a certain formal conceptual framework, for a number of technical reasons. But it is my opinion that this should not come without an accompanying explanation of the motivation and meaning behind these ideas, a discussion of why the structures under consideration are defined in this particular way to start with, and, last but not least, illustrative examples of what it is that is being described. Not doing so comes at the cost of leaning towards the unintelligible.
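
Spelled out as runnable code, the definitions lose all their mystery (a minimal Python sketch, with the function names taken directly from the quoted text):

def S(n):
    # The smallest natural number greater than n: the successor.
    return n + 1

def SI(n):
    # The inverse of S: the predecessor (only used here for n > 0).
    return n - 1

def SS(m, n):
    # SS(m, 0) = m and SS(m, n) = SS(S(m), SI(n)): this is just addition.
    if n == 0:
        return m
    return SS(S(m), SI(n))

def SSS(m, n):
    # SSS(m, 0) = SSS(0, n) = 0 and SSS(m, n) = SS(SSS(m, SI(n)), m):
    # repeated addition, in other words multiplication.
    if m == 0 or n == 0:
        return 0
    return SS(SSS(m, SI(n)), m)

print(SS(3, 4))   # 7  -- SS is addition
print(SSS(3, 4))  # 12 -- SSS is multiplication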

Unfortunately, this is not the tradition in academic journals, and in fact “explaining too much” is for some reason implicitly (or sometimes even explicitly) frowned upon. A paper with actual detailed examples has a high probability of being considered too “textbookish”. So, far too many times, coming up with the crucial insights is “left to the reader”, when a single detailed example would have sufficed to clarify the whole concept. Of course, when actually reading such papers, it is very difficult to understand them without coming up with such examples ourselves along the way, which can lead to varying degrees of despair depending on how the paper was written. As elegant as it might be to concoct the shortest and most concise definition that captures the structure you are describing, this is by far not an efficient or useful way to communicate ideas to most human readers, especially not to those who are not already familiar with what you are talking about.

After we do decode such papers, however, all the intuition and intermediate intellectual scaffolding that we built to get there stays only with us, and other readers have to go through the same (in my view often unnecessarily) laborious process. So I thought to myself – maybe I should share and write about some of the ideas and insights that I had while trying to understand such papers. And of course, in the process I may sometimes wander in different directions. Anyway, here it is. Algorithms Illustrated.