A quick pass at the Scala version of an approximation solution to the 0-1 knapsack problem came out very close to the Python version. The post about it is over here.
Using a window function via SQL may have been a better way to find the partial sums of weights, after calculating all the ratios of item profits over weights, instead of the select function used:
sum(weights) OVER (ORDER BY ratio desc) as partSumWeights
and the window function would likely be clearer.
Apache Spark can be used for all types of problems that can be expressed directly with data-parallelism and, with specific coding, for more complex algorithms.
It will be interesting to see how Spark DataFrames perform, in terms of run-time analysis, for parallel algorithms more complex than the greedy 0-1 knapsack.
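For context, the greedy approximation itself is simple: sort items by profit/weight ratio and take each item, in that order, while it still fits. Here is a minimal plain-Python sketch of that idea (the item values are invented for illustration and are not from the posted Spark solution):

```python
# Greedy 0-1 knapsack approximation: sort items by profit/weight ratio,
# then take each item in that order if it still fits the capacity.
def greedy_knapsack(items, capacity):
    """items: list of (profit, weight) pairs; returns (total_profit, chosen)."""
    by_ratio = sorted(items, key=lambda pw: pw[0] / pw[1], reverse=True)
    total_profit, total_weight, chosen = 0, 0, []
    for profit, weight in by_ratio:
        if total_weight + weight <= capacity:
            chosen.append((profit, weight))
            total_weight += weight
            total_profit += profit
    return total_profit, chosen

best, picked = greedy_knapsack([(60, 10), (100, 20), (120, 30)], capacity=50)
```

On this data the greedy pass yields a profit of 160, while the optimal choice is 220, which is exactly why this is an approximation rather than an exact solution.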
Over on the other blog, I posted a 0-1 knapsack approximation solution in Python for Apache Spark. The source code was tested on a local Ubuntu 16.04 installation of Apache Spark pre-built for Hadoop 2.7 and later.
The original source code contains setup code for the Spark/Python environment, as well as a Scala solution. I do not think it will run in Databricks without some modification, although it is possible to copy the knapsack.py file contents into the same Spark Python notebook entry as the test code to try it out.
At the Databricks Q&A site, I looked at how to take DataFrames with an identical number of rows and perform a merge on columns into one DataFrame without an existing key.
If we use the function monotonically_increasing_id() to assign increasing IDs to the rows, then merging with a join works. Not shown here: we could pad the smaller DataFrame (fewer rows) to the same length as the longer one, which is needed for monotonically_increasing_id() to assign the same IDs to both DataFrames.
# For two DataFrames that have the same number of rows, merge all columns, row by row.

# Get the function monotonically_increasing_id so we can assign ids to each row, when the
# DataFrames have the same number of rows.
from pyspark.sql.functions import monotonically_increasing_id

# Create some test data with 4 and 3 columns.
df1 = sqlContext.createDataFrame([("foo", "bar", "too", "aaa"),
                                  ("bar", "bar", "aaa", "foo"),
                                  ("aaa", "bbb", "ccc", "ddd")],
                                 ("k", "K", "v", "V"))
df2 = sqlContext.createDataFrame([("aaa", "bbb", "ddd"),
                                  ("www", "eee", "rrr"),
                                  ("jjj", "rrr", "www")],
                                 ("m", "M", "n"))

# Add increasing ids, and they should be the same.
df1 = df1.withColumn("id", monotonically_increasing_id())
df2 = df2.withColumn("id", monotonically_increasing_id())

# Perform a join on the ids.
df3 = df2.join(df1, "id", "outer").drop("id")
df3.show()
And on GitHub, I have created a repository to store Spark code snippets as I work on them.
Here is a link to a blog post by Darrell Ulm on performing a performance audit of the new Drupal 8 with the Drush command line and the Drupal site_audit module, posted back in May 2016.
At https://libraries.io/github/drulm is the Libraries.io profile page, which lists some of the open-source software contributions made by Darrell Raymond Ulm.