No suitable driver found for jdbc:mysql://192.168.25.121:3306/spark

Copyright notice: this is the author's original post; reproduction without permission is prohibited. https://blog.csdn.net/qq403580298/article/details/83095370

INFO TaskSetManager: Lost task 1.0 in stage 4.0 (TID 7) on executor localhost: java.sql.SQLException (No suitable driver found for jdbc:mysql://192.168.25.121:3306/spark) [duplicate 1]

18/10/16 18:13:46 INFO DAGScheduler: Job 2 failed: foreachPartition at IpLocation.scala:104, took 0.104398 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 4.0 failed 1 times, most recent failure: Lost task 0.0 in stage 4.0 (TID 6, localhost): java.sql.SQLException: No suitable driver found for jdbc:mysql://192.168.25.121:3306/spark
at java.sql.DriverManager.getConnection(DriverManager.java:689)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at cn.itcast.rdd.IpLocation$.data2mysql(IpLocation.scala:54)
at cn.itcast.rdd.IpLocation$$anonfun$main$2.apply(IpLocation.scala:104)
at cn.itcast.rdd.IpLocation$$anonfun$main$2.apply(IpLocation.scala:104)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:902)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:902)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Solution:
Add the MySQL JDBC driver dependency to pom.xml:

<dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>6.0.6</version>
</dependency>
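After adding the dependency and rebuilding, the executors can locate the driver. Below is a minimal sketch of the kind of `foreachPartition` JDBC write that fails in the stack trace above; the table and column names are illustrative assumptions, since the original `IpLocation.scala` is not shown. Note that with Connector/J 6.x the driver class is `com.mysql.cj.jdbc.Driver` (the old `com.mysql.jdbc.Driver` name is deprecated), and an explicit `Class.forName` can help when the jar is on the classpath but not auto-registered on the executor side.

```scala
import java.sql.{Connection, DriverManager, PreparedStatement}

object IpLocationSketch {
  // Writes one partition to MySQL over a single connection.
  // Table/column names (ip_location, province, cnt) and the
  // credentials are placeholders, not from the original code.
  def data2mysql(iter: Iterator[(String, Int)]): Unit = {
    var conn: Connection = null
    var ps: PreparedStatement = null
    try {
      // Connector/J 6.x normally self-registers via ServiceLoader;
      // the explicit load guards against executor classloader issues.
      Class.forName("com.mysql.cj.jdbc.Driver")
      conn = DriverManager.getConnection(
        "jdbc:mysql://192.168.25.121:3306/spark?useSSL=false",
        "root", "password")
      ps = conn.prepareStatement(
        "INSERT INTO ip_location (province, cnt) VALUES (?, ?)")
      for ((province, cnt) <- iter) {
        ps.setString(1, province)
        ps.setInt(2, cnt)
        ps.executeUpdate()
      }
    } finally {
      if (ps != null) ps.close()
      if (conn != null) conn.close()
    }
  }
}

// Driver side: open one connection per partition, not per record:
// rdd.foreachPartition(IpLocationSketch.data2mysql)
```

Calling `foreachPartition` rather than `foreach` keeps connection setup cost to one per partition; the JDBC `Connection` is not serializable, so it must be created inside the closure that runs on the executor, which is also why the driver jar must be on the executor classpath.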

