Polynomial-time algorithms for an NP-hard problem X do not exist unless P = NP
Approximation algorithms are used to find good (near-optimal) solutions to an NP-hard optimization problem when its decision version is NP-complete
Notation:
An optimization problem: X
An instance of X: x
The solution of x returned by an algorithm A: A(x)
The value of a possible solution y of x: val(y)
The optimal value of instance x: Opt(x)
Example: the knapsack problem (exactly solvable in O(2^n) time by brute force or O(nW) by dynamic programming); any packing with total weight at most W is a feasible solution, but not every feasible solution is optimal (a minimal DP sketch follows)
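A minimal sketch of the O(nW) dynamic program (pseudo-polynomial: W appears in the running time but is encoded with only log W bits); the function name and input format below are illustrative assumptions:

```python
def knapsack_dp(profits, weights, W):
    """Exact 0-1 knapsack via the classic O(nW) dynamic program (sketch).

    dp[c] holds the best profit achievable with total weight at most c.
    Capacities are scanned downward so each item is used at most once.
    """
    dp = [0] * (W + 1)
    for p, w in zip(profits, weights):
        for c in range(W, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + p)
    return dp[W]  # Opt(x) for this instance


# Example: the optimum is 7 (take the items with profits 3 and 4, weight 2 + 3 = 5).
print(knapsack_dp([3, 4, 5], [2, 3, 4], 5))  # 7
```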
Difficulty:
Proving that the approximation guarantee holds
This requires comparing the solution returned by the algorithm with an optimal solution that is computationally very hard to find
α-approximation algorithm:
An algorithm A is said to be an α-approximation algorithm for an optimization problem X if, for every instance x of X, A runs in polynomial time and outputs a solution y such that:
val(y) ≥ α·Opt(x) if X is a maximization problem
val(y) ≤ α·Opt(x) if X is a minimization problem
α ≤ 1 for a maximization problem and α ≥ 1 for a minimization problem
If α is close to 1, then y is close to an optimal solution; if α = 1, then A is an exact algorithm (a quick check of the guarantee is sketched below)
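As a sanity check on the definition, a tiny hypothetical helper (val(y) and Opt(x) are assumed to be computed elsewhere by problem-specific routines):

```python
def meets_alpha_guarantee(val_y, opt_x, alpha, maximization=True):
    """Check the α-approximation condition for a single instance:
    val(y) >= α·Opt(x) for maximization, val(y) <= α·Opt(x) for minimization."""
    if maximization:
        return val_y >= alpha * opt_x
    return val_y <= alpha * opt_x


# A maximization solution of value 6 against an optimum of 10
# satisfies a 1/2-guarantee but not a 2/3-guarantee.
print(meets_alpha_guarantee(6, 10, 0.5))    # True
print(meets_alpha_guarantee(6, 10, 2 / 3))  # False
```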
Trade-offs:
At least one of these 3 cannot be guaranteed:
Runs in polynomial time
Solves arbitrary instances of the problem
Finds an optimal solution to the problem
α-approximation algorithm:
Has polynomial run-time
Can solve arbitrary instances of the problem
Finds a solution whose value is within a factor α of the optimum
Techniques for designing approximation algorithms:
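First technique: greedy. The claim proved below is that the modified greedy for 0-1 knapsack is a 1/2-approximation: sort the items by profit density p_i/w_i in non-increasing order, add items while they still fit, and output the better of this greedy packing and the most profitable single item (assuming every item fits on its own, i.e. w_i ≤ W). A minimal sketch with illustrative names:

```python
def greedy_knapsack(profits, weights, W):
    """Modified greedy 1/2-approximation for 0-1 knapsack (sketch)."""
    n = len(profits)
    # Consider items in non-increasing profit-density order p_i / w_i.
    order = sorted(range(n), key=lambda i: profits[i] / weights[i], reverse=True)

    greedy_profit, used_weight = 0, 0
    for i in order:
        if used_weight + weights[i] <= W:  # add the item if it still fits
            used_weight += weights[i]
            greedy_profit += profits[i]

    # Best single item that fits on its own; this covers the p_j term in the proof.
    best_single = max((profits[i] for i in range(n) if weights[i] <= W), default=0)

    # T = max{greedy packing, best single item} >= max{p̄_j, p_j}.
    return max(greedy_profit, best_single)


# On the small instance above this returns 7, which happens to equal the optimum.
print(greedy_knapsack([3, 4, 5], [2, 3, 4], 5))  # 7
```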
Proof. Let x* (a 0-1 vector) be an optimal solution, and let T*, T be, respectively, the total profit of the optimal solution x* and of the solution returned by the greedy algorithm.
Let j be the first item not included by the greedy algorithm.
The profit accumulated by the greedy at that point is p̄_j = p_1 + ⋯ + p_{j−1} ≤ T
The total weight used at that point is w̄_j = w_1 + ⋯ + w_{j−1} ≤ W, while w̄_j + w_j > W since item j did not fit
We will show that T* ≤ p̄_j + p_j
Hence, since the algorithm returns the better of the greedy packing (profit ≥ p̄_j) and the best single item (profit ≥ p_j, assuming every item fits on its own), T ≥ max{p̄_j, p_j} ≥ (p̄_j + p_j)/2 ≥ (1/2)·T*
Proof of the claim (T* ≤ p̄_j + p_j). Since the items are sorted by non-increasing density, p_i/w_i ≤ p_j/w_j for every i ≥ j, so
T* = ∑_{i=1}^{n} p_i x_i^* ≤ ∑_{i=1}^{j−1} p_i x_i^* + ∑_{i=j}^{n} (p_j/w_j)·w_i x_i^* = (p_j/w_j)·∑_{i=1}^{n} w_i x_i^* + ∑_{i=1}^{j−1} (p_i − (p_j/w_j)·w_i) x_i^*
The first term is at most (p_j/w_j)·W < (p_j/w_j)·(w̄_j + w_j) = (p_j/w_j)·w̄_j + p_j, because item j did not fit.
In the second term every coefficient p_i − (p_j/w_j)·w_i is non-negative (items before j have density at least p_j/w_j) and x_i^* ≤ 1, so the sum is at most p̄_j − (p_j/w_j)·w̄_j.
Adding the two bounds gives T* ≤ p̄_j + p_j. ∎