Foreword
This article shows how to operate MySQL from Scala via JDBC. The code is written in plain Scala to expose the underlying mechanics; in real work you would not hand-write JDBC like this, but use a framework instead.
The routine for JDBC programming against MySQL:
1) Load the MySQL driver (the driver is required to talk to MySQL)
2) Get a Connection. This is a heavyweight step involving disk IO and network IO, so in practice a connection pool is used: connections are borrowed from the pool and returned when done. Getting connections from a pool performs better than opening them directly.
3) Create a Statement and pass it the SQL to execute
4) Read the ResultSet produced by executing the SQL
5) close() to release resources
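The pooling idea in step 2 can be illustrated with a minimal hand-rolled pool (a sketch only; the class name `SimplePool` is hypothetical, and real projects use a mature pool such as HikariCP or DBCP):

```scala
import java.util.concurrent.LinkedBlockingQueue

// Minimal illustrative resource pool: create `size` resources up front,
// lend one out per task, and always put it back afterwards.
class SimplePool[A](create: () => A, size: Int) {
  private val idle = new LinkedBlockingQueue[A]()
  (1 to size).foreach(_ => idle.put(create()))

  // Borrow a resource, run the work, and return the resource
  // even if the work throws.
  def withResource[B](work: A => B): B = {
    val r = idle.take() // blocks until a resource is free
    try work(r) finally idle.put(r)
  }
}
```

For JDBC, `create` would be something like `() => DriverManager.getConnection(url, user, password)`, so each query borrows an already-open connection instead of paying the connection cost every time.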
The general IO programming routine:
1) Open the resource
2) Do the business processing
3) Release the resource
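In Scala this open/process/release routine is often wrapped in a "loan pattern" helper, so that step 3 runs even when step 2 throws (a minimal sketch; the helper name `using` is our own):

```scala
object LoanPattern {
  // 1) the caller opens the resource, 2) f performs the business
  // processing, 3) close() always runs in finally, even on exception.
  def using[A <: AutoCloseable, B](resource: A)(f: A => B): B =
    try f(resource) finally resource.close()
}
```

Usage would look like `LoanPattern.using(new java.io.FileReader("a.txt")) { r => /* process */ }`.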
Code development
First, add the required dependencies to the pom.xml file:
<properties>
    <scala.version>2.11.8</scala.version>
    <mysql.version>5.1.28</mysql.version>
</properties>
<dependencies>
    <!-- Scala dependency -->
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>
    <!-- MySQL driver -->
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>${mysql.version}</version>
    </dependency>
</dependencies>
The code is as follows:
package com.ruozedata.bigdata.scala03

import java.sql.DriverManager

object ScalaJDBCApp {
  def main(args: Array[String]): Unit = {
    val url = "jdbc:mysql://hadoop001:3306/ruoze_d6"
    val user = "root"
    val password = "123456"
    val sql = "select DB_ID,DB_LOCATION_URI,NAME from dbs"

    // Register the JDBC driver; in production this must be included
    // Class.forName("com.mysql.jdbc.Driver") // Java style; also works in Scala
    classOf[com.mysql.jdbc.Driver] // Scala style

    val connection = DriverManager.getConnection(url, user, password)
    val stmt = connection.createStatement() // create a Statement from the Connection
    val rs = stmt.executeQuery(sql)         // execute the SQL via the Statement
    while (rs.next()) {
      val dbid = rs.getLong(1)
      val location = rs.getString(2)
      val name = rs.getString(3)
      println(dbid + " " + location + " " + name)
    }
    rs.close()
    stmt.close()
    connection.close()
  }
}
The result of running it:
1 hdfs://10-9-140-90:9000/user/hive/warehouse default
6 hdfs://10-9-140-90:9000/user/hive/warehouse/d6_test.db d6_test
11 hdfs://10-9-140-90:9000/user/hive/warehouse/test.db test
16 hdfs://10-9-140-90:9000/d6_hive/directory test2
21 hdfs://10-9-140-90:9000/user/hive/warehouse/g6_hadoop.db g6_hadoop
31 hdfs://hadoop001:9000/user/hive/warehouse/g6.db g6
Process finished with exit code 0
The code above only sketches the basic flow; exception handling and guaranteed resource cleanup have not been added.
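One way to add that robustness is a small close helper used in a finally block, so the ResultSet, Statement and Connection are released even when the query fails (a sketch; the helper name `closeQuietly` is ours):

```scala
object SafeJdbc {
  // Close each resource in the order given, skipping nulls; a failure
  // while closing one resource does not prevent closing the rest.
  def closeQuietly(resources: AutoCloseable*): Unit =
    resources.foreach { r =>
      if (r != null)
        try r.close() catch { case _: Exception => () }
    }
}
```

In the program above, the three close calls would then become `finally SafeJdbc.closeQuietly(rs, stmt, connection)`, with the JDBC calls inside the matching `try`.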