Hacker News
Ask HN: What is the most important NP-complete problem?
5 points by jsprogrammer on Dec 15, 2014 | hide | past | favorite | 9 comments
If it were possible to efficiently solve a large NP-complete problem, the solution to which real-world problem would have the largest positive impact?

I'm interested in actual, concrete problems where a solution would have a real, obvious, and positive impact. I'm not interested in the theoretical "toy" problems (travelling salesman, subgraph isomorphism, subset sum, etc.), except insofar as a real-world problem can be reformulated as one of them.



The so-called toy problems are generalizations of many real-world problems, which means that if you could solve them efficiently, that single algorithm could be used to solve problems in many different areas: engineering, physics, biology, chemistry, shipping, etc. Ask any professional in these areas and the "real world" problems they describe are basically NP-complete problems, i.e. the solution space is huge and the only way they can attack them right now is brute force: try each possible solution and check whether it works, which is obviously inefficient. Most of the time the problems are optimization problems ("How do I use the resources I have to get the most out of them?") or combination-finding problems ("What composition of chemicals would produce the desired effect?").
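The "try each possible solution and check it" pattern above can be made concrete with subset sum, one of the toy problems the question mentions. This is a minimal brute-force sketch, not an efficient algorithm: checking a candidate subset is fast, but there are 2^n subsets to try.

```python
from itertools import combinations

def subset_sum_brute_force(numbers, target):
    """Try every subset and check whether it sums to the target.

    This is the brute-force approach described above: for n numbers
    there are 2^n subsets, so the running time blows up quickly even
    though verifying any single candidate is cheap.
    """
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return combo  # a witness is easy to check, hard to find
    return None

print(subset_sum_brute_force([3, 34, 4, 12, 5, 2], 9))
```

The gap between "cheap to verify" and "expensive to find" is exactly what makes these problems NP: a fast general algorithm for finding such witnesses is the hypothetical the question is asking about.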

Once in a while a breakthrough happens in one of these areas and some problem becomes easy, but that doesn't mean anyone solved an NP-complete problem; it means they discovered that the problem was never NP-complete in the first place. So if you really want to solve a real-world problem, you would look for some insight specific to that problem that makes it efficient to solve. That wouldn't mean you had solved an NP-complete problem; it would just mean you had shown that this particular problem isn't NP-complete after all.


I get that they are generalizations; that is why I'm asking for specific instances.

Say you were trying to develop a general algorithm to solve NP-complete problems in P-time. You want this algorithm so that you can solve actual, real-world problems. Now imagine that you only have enough computing power to solve 1 problem per week; you'd likely want to prioritize the problems you solve based on the 'value' derived from solving them. I'm asking, what is the problem with the highest value?

Where could you have the most impact? Optimizing global shipping routes? Power-generation scheduling in Arizona? Encryption breaking?


The problem with the highest value will be very subjective, since everyone in a different field will say that their problem is the most valuable. I would suggest picking a problem from the domain you are most familiar with, so you can understand both the problem and its impact. Wait... are you asking for a start-up idea? ;)


That's a good point. I'm looking for highest value by the metric of something like 'weighted sum of subjective value of all problems'.

Not really about a start-up idea. I've been talking to a person who claims to have such an algorithm, but is looking for an actual instance of a real-world problem.

So far, no one I've asked has been able to describe such a problem.


It once cropped up in a seemingly simple everyday problem, which I will describe here:

When I was still a graduate student (1980s), I was helping out some local businesses. There was a need to print a lot of receipts, and each receipt would have some size measured in number of lines: say 10, 35, 20, 5, 12, etc. The paper size would be about 60 lines.

And then we thought we should reduce the number of sheets consumed by fitting as many receipts as possible on a single sheet (we had cheap manpower to cut the pages manually). The ordering of receipts was immaterial.

So given a simple list of (receipt no, #lines) pairs, how do you re-order them and create groupings so that each group's total is at most 66 lines (in this case), while the wasted space across all groups is minimized?

It is the famous knapsack problem.
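The receipt-grouping task above can be sketched with a common packing heuristic, first-fit decreasing: sort receipts by size and place each one on the first sheet it fits on. This is an approximation, not an exact minimum-waste solver (finding the true optimum is the hard part); the 66-line capacity is taken from the description.

```python
def pack_receipts(receipts, capacity=66):
    """Group (receipt_no, lines) pairs onto sheets of `capacity` lines
    using the first-fit decreasing heuristic: sort by size descending,
    then put each receipt on the first sheet with enough room left.

    Fast but only approximate; the exact minimum-waste grouping is the
    NP-hard part of the problem.
    """
    sheets = []  # each sheet: [remaining_lines, [receipt_nos]]
    for no, lines in sorted(receipts, key=lambda r: r[1], reverse=True):
        for sheet in sheets:
            if sheet[0] >= lines:
                sheet[0] -= lines
                sheet[1].append(no)
                break
        else:
            sheets.append([capacity - lines, [no]])
    return sheets

receipts = [(1, 10), (2, 35), (3, 20), (4, 5), (5, 12)]
for remaining, nos in pack_receipts(receipts):
    print("receipts:", nos, "wasted lines:", remaining)
```

Heuristics like this are what practitioners actually use when the exact problem is intractable; they give feasible groupings quickly but without an optimality guarantee.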


The knapsack problem (the classic 0-1 version) is solvable through dynamic programming in O(nW) time, where W is the capacity. That is pseudo-polynomial rather than polynomial: W can be exponential in the size of the input, so this doesn't contradict the problem's NP-completeness.

The problem you described, packing receipts onto multiple sheets while minimizing waste, is closer to bin packing (or the multiple knapsack problem), which is indeed NP-hard.
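The pseudo-polynomial dynamic program for the classic 0-1 knapsack can be sketched as follows; this is the standard textbook recurrence, shown here only to illustrate the O(nW) running time.

```python
def knapsack_01(items, capacity):
    """Classic 0-1 knapsack via dynamic programming.

    items: list of (value, weight) pairs; returns the best total value
    achievable within the capacity. Runs in O(n * capacity) time,
    which is pseudo-polynomial: the capacity can be exponential in
    the number of bits needed to write the input down.
    """
    best = [0] * (capacity + 1)  # best[w] = max value with capacity w
    for value, weight in items:
        # iterate capacity downward so each item is used at most once
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

print(knapsack_01([(60, 10), (100, 20), (120, 30)], 50))  # 220
```

Note how the table size depends on the numeric capacity, not on the input length; that gap is exactly why a pseudo-polynomial algorithm is compatible with NP-completeness.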


I don't know which reduction you'd use exactly, but efficiently solving weather modeling would be positive for almost everybody. I wouldn't reduce it to vertex cover, for example, but a 3-SAT solver might be a more useful tool.
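For a sense of what "a 3-SAT solver as a tool" means, here is a minimal brute-force sketch (trying all 2^n assignments); a hypothetical efficient algorithm for NP-complete problems would replace exactly this exponential search. The literal encoding (signed integers, DIMACS-style) is an assumption for illustration.

```python
from itertools import product

def solve_3sat(clauses, n_vars):
    """Brute-force 3-SAT over n_vars variables.

    clauses: list of 3-tuples of nonzero ints, where literal k means
    variable |k| is true if k > 0 and false if k < 0.
    Returns a satisfying assignment as {var: bool}, or None.
    """
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assign
    return None

# (x1 or not-x2 or x3) and (not-x1 or x2 or not-x3)
print(solve_3sat([(1, -2, 3), (-1, 2, -3)], 3))
```

Because 3-SAT is NP-complete, any problem in NP can in principle be encoded as clauses like these, which is why a fast SAT solver would be such a general-purpose tool.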


Isn't school roster scheduling NP-complete?


I found some papers that claim it is. Do you know of a real-world dataset?



