Apache Spark, Scala, Approximation Algorithm for 0-1 Knapsack

A quick pass at the Scala version of an approximation solution to the 0-1 knapsack problem turns out very close to the Python version. The post for it is over here.
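For reference, here is a minimal sketch of the greedy heuristic in Spark Scala. This is not the original post's code: the object name, toy data, column names, and capacity value are all illustrative. It ranks items by profit-to-weight ratio and keeps the ranked prefix whose running weight fits the capacity.

import org.apache.spark.sql.SparkSession

object GreedyKnapsack {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("GreedyKnapsack")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val capacity = 15.0

    // Toy item list: (item, profit, weight).
    val items = Seq(
      ("a", 10.0, 4.0),
      ("b", 7.0, 5.0),
      ("c", 12.0, 6.0),
      ("d", 3.0, 3.0)
    ).toDF("item", "profit", "weight")

    // Rank items by profit-to-weight ratio, best first.
    val ranked = items
      .select($"item", $"profit", $"weight",
        ($"profit" / $"weight").as("ratio"))
      .orderBy($"ratio".desc)

    // Take the ranked prefix whose total weight fits the capacity.
    var running = 0.0
    val chosen = ranked.collect().takeWhile { row =>
      running += row.getAs[Double]("weight")
      running <= capacity
    }

    chosen.foreach(println)
    spark.stop()
  }
}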

After calculating the item profit-to-weight ratios, using a window function via SQL might have been a better way to find the partial sums of weights than the select function used:

sum(weights) OVER (ORDER BY ratio DESC) AS partSumWeights

and the window function would likely be clearer.
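As a rough sketch of that alternative, and assuming the items DataFrame, capacity value, and spark.implicits._ import from the sketch above, the same partial sum can be written with the DataFrame window API. An unpartitioned window pulls all rows into one partition, which Spark will warn about, but the item list here is small:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{desc, sum}

// Running sum of weights in descending-ratio order.
val byRatioDesc = Window.orderBy(desc("ratio"))

val selected = items
  .withColumn("ratio", $"profit" / $"weight")
  .withColumn("partSumWeights", sum($"weight").over(byRatioDesc))
  .filter($"partSumWeights" <= capacity) // keep the prefix that fits

selected.show()

This replaces the collect-and-scan step on the driver with a single declarative query, at the cost of the single-partition shuffle the unpartitioned window implies.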

Apache Spark should handle any problem that can be expressed directly in terms of data parallelism, and, with more specific coding, more complex algorithms as well.

It will be interesting to see how Spark DataFrames perform, in terms of run time, on parallel algorithms more complex than the greedy 0-1 knapsack.


Apache Spark Python Source for 0-1 Knapsack Approximation Solution

Over on the other blog, I posted a 0-1 knapsack approximation solution in Python for Apache Spark. The source code was tested in a local installation on Ubuntu 16.04, with Apache Spark pre-built for Hadoop 2.7 and later.

The original source contains setup code for the Spark/Python environment, as well as a Scala solution. I do not think it will run in Databricks without some modification, although it's possible to copy the knapsack.py file contents into the same Spark Python notebook cell as the test code to try it out.

The original 0-1 knapsack approximation source code is over on GitHub.