Apache Spark, Scala, Approximation Algorithm for 0-1 Knapsack

A quick pass at a Scala version of an approximation solution to the 0-1 knapsack problem came out very close to the Python version. The post about it is over here.

Using a window function via SQL may have been a better way to find the partial sums of the weights after calculating each item's profit-to-weight ratio, rather than the select approach used:

sum(weights) OVER (ORDER BY ratio desc) as partSumWeights

and the window function version would likely be clearer.
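For example, a minimal PySpark sketch of that idea could look like the following (the items, capacity, and column names here are illustrative assumptions, not taken from the knapsack post; the same window could also be written in Spark SQL or from Scala):

# Rank items by profit/weight ratio, take a running sum of the weights over a window,
# and greedily keep items while the partial sum stays within the capacity.
# (Hypothetical item data and capacity, for illustration only.)
from pyspark.sql import Window
from pyspark.sql import functions as F

items = sqlContext.createDataFrame(
    [("a", 10.0, 5.0), ("b", 6.0, 3.0), ("c", 7.0, 4.0)],
    ("item", "profit", "weight"))
capacity = 8.0

w = Window.orderBy(F.col("ratio").desc())
ranked = (items
          .withColumn("ratio", F.col("profit") / F.col("weight"))
          .withColumn("partSumWeights", F.sum("weight").over(w)))

# Keep the top-ratio items whose running weight total still fits the capacity.
selected = ranked.filter(F.col("partSumWeights") <= capacity)
selected.show()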

Apache Spark should be usable for any problem that can be expressed directly with data parallelism, and, with more specific coding, for more complex algorithms as well.

It will be interesting to see how Spark DataFrames perform, in terms of run time, on parallel algorithms more complex than the greedy 0-1 knapsack.


Merging DataFrames Columns for Apache Spark in Python without a Key Column

At the Databricks Q&A site, I looked at how to take DataFrames with the same number of rows and merge their columns into one DataFrame without an existing key.

If we use the function monotonically_increasing_id() to give increasing IDs to the rows, then merging with a join works. Not shown here: we could pad the smaller DataFrame (the one with fewer rows) to be the same length as the longer one. This is needed so that monotonically_increasing_id() gives the same IDs for both of the DataFrames.

# For two DataFrames that have the same number of rows, merge all columns, row by row.

# Get the function monotonically_increasing_id so we can assign an ID to each row
# when the DataFrames have the same number of rows.
from pyspark.sql.functions import monotonically_increasing_id

# Create some test data: df1 has 4 columns, df2 has 3, each with 3 rows.
df1 = sqlContext.createDataFrame(
    [("foo", "bar", "too", "aaa"), ("bar", "bar", "aaa", "foo"), ("aaa", "bbb", "ccc", "ddd")],
    ("k", "K", "v", "V"))
df2 = sqlContext.createDataFrame(
    [("aaa", "bbb", "ddd"), ("www", "eee", "rrr"), ("jjj", "rrr", "www")],
    ("m", "M", "n"))

# Add increasing IDs; with the same number of rows, they should match in both DataFrames.
df1 = df1.withColumn("id", monotonically_increasing_id())
df2 = df2.withColumn("id", monotonically_increasing_id())

# Perform a join on the ids.
df3 = df2.join(df1, "id", "outer").drop("id")
df3.show()
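One caveat, not covered in the original answer: monotonically_increasing_id() guarantees increasing, unique IDs, but not consecutive ones, and the actual values depend on how each DataFrame is partitioned. A hedged alternative sketch, reusing the id columns added above, is to convert them to consecutive row numbers with row_number() before joining (Spark will warn that the un-partitioned window moves all rows to a single partition, which is acceptable for small DataFrames):

# Convert the monotonically increasing ids into consecutive row numbers 1, 2, 3, ...
# so the two DataFrames line up row by row regardless of partitioning.
from pyspark.sql import Window
from pyspark.sql.functions import row_number

w = Window.orderBy("id")
df1_rn = df1.withColumn("rn", row_number().over(w)).drop("id")
df2_rn = df2.withColumn("rn", row_number().over(w)).drop("id")

df4 = df2_rn.join(df1_rn, "rn", "inner").drop("rn")
df4.show()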

I have also created a GitHub repository to store Spark code snippets as I work on them.