Simplifying Complexity
Algorithmic Contributions of Richard M. Karp
During an April award ceremony in Philadelphia, the Franklin Institute awarded the 2004 Benjamin Franklin Medal in Computer and Cognitive Science to Richard M. Karp for “his contributions to the understanding of computational complexity.” Professor Karp joins an elite group of Franklin Medal laureates including John McCarthy, who coined the term “artificial intelligence,” interactive computer systems designer Lucy Suchman, Douglas Engelbart, inventor of the computer mouse, computer language theorist Noam Chomsky, and supercomputer designer Seymour Cray. I’d like to think I’ve been fairly plugged into the computer science scene over the past 20 years. To be honest, I can use a computer mouse to write AI code in several programming languages, but “computational complexity” has got me puzzled. I’m sure it’s big but, as an experimentalist, I’m not sure I even know what that means. So, it’s off to the net to do some snooping…

Photo by TheDigitalArtist on Pixabay
Complexity theory is part of a larger “theory of computation” and is concerned with the time required to compute the solution to any problem. The amount of time required to find the best solution to a problem is based on its size. Say, for instance, we wish to find the maximum value contained in an array of (n) integers. A straightforward algorithm would be to set the first value as the maximum, compare every subsequent value in the array to the maximum and, if it is larger, store that value as the maximum and continue. At the end of the calculation, we would be guaranteed to have the best answer and, at most, we would require (n − 1) comparisons to obtain it. If we doubled the size of the array to be searched, we would need (2n − 1) steps. The worst-case number of required comparisons follows the general form (C₁n¹ − 1n⁰): a polynomial equation, P, used to predict the number of steps, and hence the amount of time, to arrive at the best solution. Other problems may include higher-order polynomial terms, such as (45n⁸), but given enough processing muscle and patience the optimal answer is certain to be achieved.
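To make the counting concrete, here is a minimal Python sketch of the linear scan just described (the language and the function name find_max are my illustration, not part of the original column); it uses at most (n − 1) comparisons for an array of (n) values.

```python
def find_max(values):
    """Return the largest element using at most len(values) - 1 comparisons."""
    maximum = values[0]          # set the first value as the maximum
    for value in values[1:]:     # compare every subsequent value to the maximum
        if value > maximum:      # if it is larger ...
            maximum = value      # ... store it and continue
    return maximum

print(find_max([7, 42, 3, 19]))  # prints 42
```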
The complexity issue arises when the cost in computational time does not appear to follow any simple polynomial function (these are the so-called NP, or nondeterministic polynomial time, problems). Professor Herbert S. Wilf at the University of Pennsylvania describes a simple example of such a problem in his book Algorithms and Complexity, freely available on his Web site. The problem is to determine if the integer n is a prime number (i.e., divisible only by 1 and n). The answer will be a simple Yes or No. One solution would be to see if n is evenly divisible by the integer values m = (2, 3, … √n), with a time cost function of √n steps, which seems harmless enough. As you increase the number of binary digits used to represent the integer n, the problem becomes very large, very quickly. Given that it takes approximately log₂ n bits (B) to represent n in the computer, the time cost function evaluates to 2^(B/2). A computer capable of calculating divisions at a rate of 1 GHz would require 571 years to determine if a single 128-bit integer is prime. After that, it could begin work on the remaining 3.4 × 10³⁸ 128-bit integers. Large, intractable time cost functions such as this one are termed “hard” problems for obvious reasons. The closely related, and similarly hard, problem of factoring a large integer into its prime components serves as the basis of RSA public-key encryption.
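Here is a small Python sketch of that trial-division test (my own illustration of Wilf’s example, with an assumed function name is_prime); it tries every candidate divisor m from 2 up to √n, so the work grows roughly as √n, or 2^(B/2) in the number of bits B.

```python
import math

def is_prime(n):
    """Trial division: test candidate divisors m = 2, 3, ..., floor(sqrt(n))."""
    if n < 2:
        return False
    for m in range(2, math.isqrt(n) + 1):   # about sqrt(n) = 2**(B/2) divisions
        if n % m == 0:
            return False                    # found a divisor, so n is composite
    return True

print(is_prime(2**31 - 1))  # True; even this 31-bit integer takes ~46,000 divisions
```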
Faced with the growing number of “hard” problems discovered in Berkeley’s newly created Computer Science Department in the late 1960s, Professor Karp set himself to the task of categorizing these uncharted, theoretically complex NP problems. Upon examination, the NP problems all appeared to be combinatorial in nature, including the ordering of cities that minimizes the tour length in the traveling salesman problem (TSP), the assignment of colors to the vertices of a graph that ensures adjacent vertices receive different colors, and the ordering of steps required to minimize factory production time. In his seminal 1972 publication, Reducibility Among Combinatorial Problems, Professor Karp showed that a whole class of these hard problems, now known as “NP-Complete,” can each be transformed (reduced) into any of the others in polynomial time; the paper serves as a grand unifying theorem of computational complexity. Over 30 years of advancements in computer hardware and technology have not brought us closer to solving NP-Complete problems exactly on a human timescale, but multiple approaches to obtaining reasonable, best-available solutions continue to be developed. Classifying a newly discovered problem as NP-Complete makes these solutions instantly applicable.
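To give a flavor of what “reducing” one problem to another means, here is an illustrative Python sketch (my example, not one drawn from Karp’s paper) of a textbook reduction between two graph problems: a graph with n vertices has an independent set of size k exactly when the remaining n − k vertices form a vertex cover, so one question can be rewritten as the other in polynomial time.

```python
def independent_set_to_vertex_cover(num_vertices, edges, k):
    """Rewrite an Independent Set question (G, k) as a Vertex Cover question (G, n - k).

    A vertex set S is independent (no edge has both endpoints in S) exactly when
    the remaining vertices touch every edge, i.e., form a vertex cover.
    """
    return num_vertices, edges, num_vertices - k

# A triangle has an independent set of size 1, hence a vertex cover of size 3 - 1 = 2.
print(independent_set_to_vertex_cover(3, [(0, 1), (1, 2), (0, 2)], 1))
```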
Currently, there is no theoretical proof that a fast, polynomial-time algorithm exists to obtain the absolute best solution for any of these hard problems, nor is there proof that one does not. The payoff is that, if such an algorithm is found, it will be applicable to every NP-Complete problem, including those in network optimization, genomics, and proteomics. Professor Karp has not discovered the grand unifying algorithm, but his work has simplified the search.
This material originally appeared as a Contributed Editorial in Scientific Computing and Instrumentation 21:8 July 2004, pg. 10.
William L. Weaver is an Associate Professor in the Department of Integrated Science, Business, and Technology at La Salle University in Philadelphia, PA USA. He holds a B.S. Degree with Double Majors in Chemistry and Physics and earned his Ph.D. in Analytical Chemistry with expertise in Ultrafast LASER Spectroscopy. He teaches, writes, and speaks on the application of Systems Thinking to the development of New Products and Innovation.

