Overview
This website is about our work on modeling hard combinatorial optimization problems (COPs) as polynomial-sized linear programming (LP) problems. In other words, we show that the numbers of variables and constraints of our LP model for a given problem are each bounded by polynomial functions of the size of the problem input, which means that they grow less than exponentially as the problem size increases. Since procedures that solve an LP in polynomial time have been known since the 1980s, an immediate consequence of our work is that it resolves (in the affirmative) the important and long-standing question of whether the computational complexity classes “P” and “NP” are equal.
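To make the contrast concrete, the short, purely illustrative Python sketch below compares a hypothetical polynomial size bound with exponential growth; the exponent 9 is a placeholder chosen for illustration only and is not the bound established in our papers.

```python
# Illustrative only: a hypothetical polynomial bound n**9 versus exponential
# growth 2**n.  The exponent 9 is a placeholder, not the bound claimed in
# the papers on this site.
def polynomial_bound(n, k=9):
    return n ** k

def exponential_bound(n):
    return 2 ** n

for n in (10, 50, 100, 200):
    p, e = polynomial_bound(n), exponential_bound(n)
    print(f"n={n:>3}  n^9={p:.3e}  2^n={e:.3e}  ratio 2^n / n^9 = {e / p:.3e}")
```

For small n the exponential bound can even be smaller, but it overtakes any fixed-degree polynomial as n grows, which is the sense in which polynomial-sized models "grow less than exponentially."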
Our work is not a computational complexity development per se, despite its significant impact in that area. The “P = NP” result that the work affirms is essentially incidental, and assessing the correctness of the work is beyond the scope of existing computational complexity arguments. Those arguments pertain to what can and cannot be done by way of LP modeling of hard COPs in terms of the natural variables used in traditional mathematical descriptions of those problems. A paper of ours detailing this point is provided on this website.
In the context of computational complexity specifically, a distinguishing feature of our work, relative to approaches that have focused on the “P versus NP” question in general, is its unifying nature. Ours is not an ad hoc approach applicable to only one particular hard COP. In fact, the main paper on this website correctly solves the Quadratic Assignment Problem (QAP), of which all the NP-Complete COPs (such as the TSP) are special cases. Another distinguishing feature of our work is the supporting software tools we provide to enable testing and experimentation with our developments and claims on actual problem instances.
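For readers unfamiliar with the relationship between the two problems, the following sketch illustrates the standard sense in which the TSP is a special case of the QAP: taking the QAP “flow” matrix to be the adjacency matrix of a fixed Hamiltonian cycle and the “distance” matrix to be the TSP distances makes the QAP objective of a permutation equal to the length of the corresponding tour. This is a generic textbook illustration and does not use the formulation of our paper.

```python
import itertools

def qap_cost(F, D, perm):
    """Quadratic Assignment objective: sum over i, j of F[i][j] * D[perm[i]][perm[j]]."""
    n = len(perm)
    return sum(F[i][j] * D[perm[i]][perm[j]] for i in range(n) for j in range(n))

def tsp_as_qap(D):
    """Encode a TSP with distance matrix D as a QAP instance (F, D).

    F is the adjacency matrix of the fixed cycle 0 -> 1 -> ... -> n-1 -> 0,
    so assigning cities to cycle positions reproduces tour lengths.
    """
    n = len(D)
    F = [[1 if j == (i + 1) % n else 0 for j in range(n)] for i in range(n)]
    return F, D

# Tiny example: brute-force both problems and check that the optima coincide.
D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
F, _ = tsp_as_qap(D)

best_qap = min(qap_cost(F, D, p) for p in itertools.permutations(range(len(D))))
best_tour = min(sum(D[t[i]][t[(i + 1) % len(t)]] for i in range(len(t)))
                for t in itertools.permutations(range(len(D))))
print(best_qap == best_tour)  # True: the QAP optimum equals the optimal tour length
```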
From the beginning, the “P” class has been meant to capture a notion of a problem being “easy,” while being (or being believed to be) outside “P” has been meant to capture a notion of a problem being “hard.” The near-mythical status that the “P versus NP” question has come to acquire has led to an increasing number and variety of efforts to popularize its basic concepts. One drawback of these popularizations over the past decade and a half, however, is that they have generally tended to over-simplify the notions of “easy” and “hard,” to the point of becoming misleading. The fact is that these notions are fundamentally theoretical ones, and a problem being in “P” does not necessarily imply that it is easy to solve in practice. That is the case for our proposed models: they are of polynomial size, but that does not make them easy to solve in practice.
The model that was originally the main subject of this website was several orders of magnitude “smaller” than its previous versions. However, through the challenge we had issued, counter-examples to this model were brought to our attention. All of the results in the paper describing that model are correct, except that they are not sufficient to imply the main conclusion, namely the integrality of the model. “Flow balance” constraints need to be explicitly enforced in that model, and that is not possible without “reverting” back to the initial higher-dimensional space of our modeling.
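For readers unfamiliar with the term, “flow balance” (flow conservation) constraints require that, at each node of a network, total outflow minus total inflow equal the node's supply. The following sketch is a generic illustration of such constraints in a small LP solved with SciPy; it is not the model discussed on this site.

```python
# Generic illustration of flow-balance (conservation) constraints in an LP,
# solved with SciPy; this is NOT the model discussed on this website.
from scipy.optimize import linprog

# Directed arcs of a tiny network with nodes s=0, a=1, b=2, t=3
arcs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
cost = [2, 4, 1, 6, 3]          # per-unit cost on each arc
supply = {0: 1, 3: -1}          # ship one unit from node 0 to node 3

# One flow-balance equality per node: outflow - inflow = supply(node)
nodes = sorted({v for arc in arcs for v in arc})
A_eq = [[(1 if u == v else 0) - (1 if w == v else 0) for (u, w) in arcs]
        for v in nodes]
b_eq = [supply.get(v, 0) for v in nodes]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(arcs))
print(res.x, res.fun)  # optimal arc flows and total cost
```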
The insights gained through the procedure for generating the counter-examples have allowed us to develop a much more streamlined and, at the same time, more generic version of our initial higher-dimensional models. The new model is provided on this site. The details of the counter-examples will be made available at a later date.