INFO TaskSetManager: Lost task 1.0 in stage 4.0 (TID 7) on executor localhost: java.sql.SQLException (No suitable driver found for jdbc:mysql://192.168.25.121:3306/spark) [duplicate 1]
18/10/16 18:13:46 INFO DAGScheduler: Job 2 failed: foreachPartition at IpLocation.scala:104, took 0.104398 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 4.0 failed 1 times, most recent failure: Lost task 0.0 in stage 4.0 (TID 6, localhost): java.sql.SQLException: No suitable driver found for jdbc:mysql://192.168.25.121:3306/spark
    at java.sql.DriverManager.getConnection(DriverManager.java:689)
    at java.sql.DriverManager.getConnection(DriverManager.java:247)
    at cn.itcast.rdd.IpLocation$.data2mysql(IpLocation.scala:54)
    at cn.itcast.rdd.IpLocation$$anonfun$main$2.apply(IpLocation.scala:104)
    at cn.itcast.rdd.IpLocation$$anonfun$main$2.apply(IpLocation.scala:104)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:902)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:902)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
    at org.apache.spark.scheduler.Task.run(Task.scala:86)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
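The `java.sql.SQLException: No suitable driver found for jdbc:mysql://...` raised inside `data2mysql` is the classic JDBC classpath error: `DriverManager.getConnection` cannot resolve a `jdbc:mysql://` URL because the MySQL Connector/J jar is not on the executor classpath (or, with older connector versions, the driver class was never registered). A minimal sketch of the usual fix is below; the `data2mysql` signature, table name, and credentials are assumptions for illustration, not taken from the original code:

```scala
// Sketch only: assumes the MySQL Connector/J jar is shipped to the executors,
// e.g. via  spark-submit --jars mysql-connector-java-<version>.jar ...
// or bundled into the application's assembly jar.
import java.sql.{Connection, DriverManager, PreparedStatement}

def data2mysql(iter: Iterator[(String, Int)]): Unit = {
  // Explicitly register the driver. Modern JDBC 4 drivers auto-register via
  // ServiceLoader, but this line is the safe fix for older connector jars.
  Class.forName("com.mysql.jdbc.Driver")

  var conn: Connection = null
  var ps: PreparedStatement = null
  try {
    // Connection details from the log; user/password are hypothetical.
    conn = DriverManager.getConnection(
      "jdbc:mysql://192.168.25.121:3306/spark", "root", "password")
    // Hypothetical target table for the (location, count) pairs.
    ps = conn.prepareStatement(
      "INSERT INTO location_info (location, counts) VALUES (?, ?)")
    iter.foreach { case (location, count) =>
      ps.setString(1, location)
      ps.setInt(2, count)
      ps.executeUpdate()
    }
  } finally {
    // Close JDBC resources even if an insert fails.
    if (ps != null) ps.close()
    if (conn != null) conn.close()
  }
}
```

Called as `rdd.foreachPartition(data2mysql)`, this opens one connection per partition rather than one per record. If `Class.forName` itself throws `ClassNotFoundException`, the jar really is missing from the executors, and passing it with `--jars` (or `--driver-class-path` for driver-side access) is the fix.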