Why does select after a join raise an exception in a Java Spark DataFrame?

moudi :

I have two DataFrames, left and right. They are identical, each consisting of the same three columns, src, predicate, and dst, with the same values.

1- I tried to join these DataFrames with the condition that dst in left equals src in right, but it did not work. Where is the error?

Dataset<Row> r = left
  .join(right, left.col("dst").equalTo(right.col("src")));

Result:

+---+---------+---+---+---------+---+
|src|predicate|dst|src|predicate|dst|
+---+---------+---+---+---------+---+
+---+---------+---+---+---------+---+
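
A likely cause of the empty result: both DataFrames are built from the same data, so left.col("dst") and right.col("src") can end up resolving to the same underlying attributes, and the condition never matches. Aliasing each side before the join keeps the two sets of columns distinct. A minimal sketch, assuming the input DataFrame is called input_df as in part 2 and that org.apache.spark.sql.functions.col is statically imported:

Dataset<Row> leftAliased = input_df.as("l");
Dataset<Row> rightAliased = input_df.as("r");
// Qualified names such as "l.dst" and "r.src" now resolve unambiguously.
Dataset<Row> joined = leftAliased.join(
    rightAliased,
    col("l.dst").equalTo(col("r.src")));
joined.show();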

2- If I rename dst in left to dst2, and the src column in right to dst2, then apply the join, it works. But if I try to select some columns from the obtained DataFrame, it raises an exception. Where is my error?

Dataset<Row> left = input_df.withColumnRenamed("dst", "dst2");
Dataset<Row> right = input_df.withColumnRenamed("src", "dst2");
Dataset<Row> r = left.join(right, left.col("dst2").equalTo(right.col("dst2")));

Then:

left.show();

gives:

+---+---------+----+
|src|predicate|dst2|
+---+---------+----+
|  a|       r1| :b1|
|  a|       r2|   k|
|:b1|       r3| :b4|
|:b1|      r10|   d|
|:b4|       r4|   f|
|:b4|       r5| :b5|
|:b5|       r9|   t|
|:b5|      r10|   e|
+---+---------+----+

and

right.show();

gives:

+----+---------+---+
|dst2|predicate|dst|
+----+---------+---+
|   a|       r1|:b1|
|   a|       r2|  k|
| :b1|       r3|:b4|
| :b1|      r10|  d|
| :b4|       r4|  f|
| :b4|       r5|:b5|
| :b5|       r9|  t|
| :b5|      r10|  e|
+----+---------+---+

Result:

+---+---------+----+----+---------+---+
|src|predicate|dst2|dst2|predicate|dst|
+---+---------+----+----+---------+---+
|  a|       r1|  b1|  b1|      r10|  d|
|  a|       r1|  b1|  b1|       r3| b4|
| b1|       r3|  b4|  b4|       r5| b5|
| b1|       r3|  b4|  b4|       r4|  f|
+---+---------+----+----+---------+---+


Dataset<Row> r = left
  .join(right, left.col("dst2").equalTo(right.col("dst2")))
  .select(left.col("src"),right.col("dst"));

Result:

Exception in thread "main" org.apache.spark.sql.AnalysisException: resolved attribute(s) dst#45 missing from dst2#177,src#43,predicate#197,predicate#44,dst2#182,dst#198 in operator !Project [src#43, dst#45];

3- Supposing the select works, how can I add the obtained DataFrame back to the left DataFrame?

I'm working in Java.

jgp :

You were using:

r = r.select(left.col("src"), right.col("dst"));

It seems that Spark cannot trace the lineage back to the right DataFrame, which is not shocking, as the plan goes through a lot of optimization.
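
The attribute IDs in the exception (dst#45 versus dst#198) are consistent with this: the self-join forces Spark to re-alias one side's columns, so the ID that right.col("dst") resolves to is no longer present in the joined plan. One way to inspect this, as a sketch:

// Prints the parsed, analyzed, optimized, and physical plans,
// including the unique ID attached to each attribute.
r.explain(true);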

Assuming your desired output is:

+---+---+
|src|dst|
+---+---+
| b1|:b5|
| b1|  f|
|:b4|  e|
|:b4|  t|
+---+---+

You could use one of these three options:

Using the col() method

Dataset<Row> resultOption1Df = r.select(left.col("src"), r.col("dst"));
resultOption1Df.show();

Using the col() static function

Dataset<Row> resultOption2Df = r.select(col("src"), col("dst"));
resultOption2Df.show();

Using the column names

Dataset<Row> resultOption3Df = r.select("src", "dst");
resultOption3Df.show();
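
A fourth option, aliasing each side before the join, is a common way to keep qualified column names resolvable; a minimal sketch along the same lines:

Dataset<Row> leftAliased = left.as("l");
Dataset<Row> rightAliased = right.as("r");
Dataset<Row> joined = leftAliased.join(
    rightAliased,
    col("l.dst2").equalTo(col("r.dst2")));
// Columns are selected through their alias-qualified names.
Dataset<Row> resultOption4Df = joined.select(col("l.src"), col("r.dst"));
resultOption4Df.show();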

Here is the complete source code:

package net.jgp.books.spark.ch12.lab990_others;

import static org.apache.spark.sql.functions.col;

import java.util.ArrayList;
import java.util.List;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

/**
 * Self join.
 * 
 * @author jgp
 */
public class SelfJoinAndSelectApp {

  /**
   * main() is your entry point to the application.
   * 
   * @param args
   */
  public static void main(String[] args) {
    SelfJoinAndSelectApp app = new SelfJoinAndSelectApp();
    app.start();
  }

  /**
   * The processing code.
   */
  private void start() {
    // Creates a session on a local master
    SparkSession spark = SparkSession.builder()
        .appName("Self join")
        .master("local[*]")
        .getOrCreate();

    Dataset<Row> inputDf = createDataframe(spark);
    inputDf.show(false);

    Dataset<Row> left = inputDf.withColumnRenamed("dst", "dst2");
    left.show();

    Dataset<Row> right = inputDf.withColumnRenamed("src", "dst2");
    right.show();

    Dataset<Row> r = left.join(
        right,
        left.col("dst2").equalTo(right.col("dst2")));
    r.show();

    Dataset<Row> resultOption1Df = r.select(left.col("src"), r.col("dst"));
    resultOption1Df.show();

    Dataset<Row> resultOption2Df = r.select(col("src"), col("dst"));
    resultOption2Df.show();

    Dataset<Row> resultOption3Df = r.select("src", "dst");
    resultOption3Df.show();
  }

  private static Dataset<Row> createDataframe(SparkSession spark) {
    StructType schema = DataTypes.createStructType(new StructField[] {
        DataTypes.createStructField(
            "src",
            DataTypes.StringType,
            false),
        DataTypes.createStructField(
            "predicate",
            DataTypes.StringType,
            false),
        DataTypes.createStructField(
            "dst",
            DataTypes.StringType,
            false) });

    List<Row> rows = new ArrayList<>();
    rows.add(RowFactory.create("a", "r1", ":b1"));
    rows.add(RowFactory.create("a", "r2", "k"));
    rows.add(RowFactory.create("b1", "r3", ":b4"));
    rows.add(RowFactory.create("b1", "r10", "d"));
    rows.add(RowFactory.create(":b4", "r4", "f"));
    rows.add(RowFactory.create(":b4", "r5", ":b5"));
    rows.add(RowFactory.create(":b5", "r9", "t"));
    rows.add(RowFactory.create(":b5", "r10", "e"));

    return spark.createDataFrame(rows, schema);
  }
}
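
Regarding part 3 of the question (adding the obtained DataFrame back to the left one): assuming "add" means appending the joined pairs as new rows, one sketch is to give the result the same three-column shape and union it with the original. The "derived" predicate value is a made-up placeholder, and lit() comes from a static import of org.apache.spark.sql.functions.lit:

Dataset<Row> derived = resultOption3Df
    .withColumn("predicate", lit("derived")) // hypothetical predicate value
    .select("src", "predicate", "dst");      // match inputDf's column order
Dataset<Row> combined = inputDf.union(derived); // union matches by position
combined.show();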
