It took quite a lot of effort to finally crack this stubborn problem. I combed through a great many blogs and forum threads on Baidu and never gave up; by repeated experimentation, peeling back the layers like Sherlock Holmes, I eventually found the root cause. It is well worth writing down, so here is the whole story of how the problem was solved.
The scheduled-job collection in our network management platform would always, after running for a while, start failing with java.sql.SQLException: 关闭的连接 ("closed connection"). The stack trace looked like this:
java.sql.SQLException: 关闭的连接
    at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
    at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:110)
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:171)
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:227)
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:439)
    at oracle.sql.CLOB.getDBAccess(CLOB.java:1083)
    at oracle.sql.CLOB.getAsciiStream(CLOB.java:228)
    at org.hibernate.lob.SerializableClob.getAsciiStream(SerializableClob.java:45)
    at com.wri.hy.itmanagerv2.dg.dao.TaskParamDao.queryAllTaskParam(TaskParamDao.java:99)
    at com.wri.hy.itmanagerv2.dg.autotask.dataservce.TaskParamService.queryAllTaskParam(TaskParamService.java:33)
    at com.wri.hy.itmanagerv2.dg.autotask.sched.SchedController.init(SchedController.java:96)
    at com.wri.hy.itmanagerv2.dg.autotask.sched.SchedReadDB.run(SchedReadDB.java:59)
    at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
    at java.util.concurrent.FutureTask$Sync.innerRunAndReset(Unknown Source)
    at java.util.concurrent.FutureTask.runAndReset(Unknown Source)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(Unknown Source)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(Unknown Source)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Online sources said 关闭的连接 ("closed connection") means the pool no longer holds any open Connection. Our pool was written by a former colleague modeled on BoneCP, so it should have been fine; I added plenty of logging to the pool code but still could not spot anything wrong. After a long while I shifted my attention to the other code paths that save or update the database, to see whether a failure during a save was what was closing connections. Sure enough, the following exception appeared while saving collected data:
org.springframework.dao.DataAccessResourceFailureException: Hibernate operation: Could not execute JDBC batch update; SQL [insert into ITMANAGERV2_PLANRESULTINFO (TASK_ID, RESULT, getTime, EXCEPTION_FLAG, EXCEPTION_LINE, ID) values (?, ?, ?, ?, ?, ?)]; Io 异常: Software caused connection abort: socket write error; nested exception is java.sql.BatchUpdateException: Io 异常: Software caused connection abort: socket write error
Caused by: java.sql.BatchUpdateException: Io 异常: Software caused connection abort: socket write error
    at oracle.jdbc.driver.DatabaseError.throwBatchUpdateException(DatabaseError.java:602)
    at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:9350)
    at oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:210)
    at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:58)
    at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:195)
    at org.hibernate.jdbc.AbstractBatcher.prepareStatement(AbstractBatcher.java:91)
    at org.hibernate.jdbc.AbstractBatcher.prepareStatement(AbstractBatcher.java:86)
    at org.hibernate.jdbc.AbstractBatcher.prepareBatchStatement(AbstractBatcher.java:171)
    at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2048)
    at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2427)
    at org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:51)
    at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:248)
    at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:232)
    at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:139)
    at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:297)
    at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
    at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:985)
    at org.springframework.orm.hibernate3.HibernateAccessor.flushIfNecessary(HibernateAccessor.java:394)
    at org.springframework.orm.hibernate3.HibernateTemplate.execute(HibernateTemplate.java:367)
    at org.springframework.orm.hibernate3.HibernateTemplate.saveOrUpdateAll(HibernateTemplate.java:688)
    at com.wri.hy.itmanagerv2.dg.dao.ResultLogDao.saveCollections(ResultLogDao.java:13)
    at com.wri.hy.itmanagerv2.dg.autotask.sched.ResultLogThread.run(ResultLogThread.java:157)
    at java.lang.Thread.run(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Oh my god: a plain hibernateTemplate saveOrUpdateAll could trigger this exception. Searching Baidu and Google for it turned up nothing useful, so I had no choice but to sit down and analyze the save code. The persisted POJO is defined like this:
public class PlanResultInfo {
    private Long id; // id
    private PlanTaskInfo taskInfo; // owning scheduled-job task
    private Clob result; // execution result
    private String getTime; // collection time
    private String exception_flag; // exception flag
    private String exception_line; // exception line number
    /**
     * Holds the execution result as a String
     */
    private transient String strResult;

    public String getException_flag() {
        return exception_flag;
    }
    public void setException_flag(String exception_flag) {
        this.exception_flag = exception_flag;
    }
    public String getException_line() {
        return exception_line;
    }
    public void setException_line(String exception_line) {
        this.exception_line = exception_line;
    }
    public String getGetTime() {
        return getTime;
    }
    public void setGetTime(String getTime) {
        this.getTime = getTime;
    }
    public Long getId() {
        return id;
    }
    public void setId(Long id) {
        this.id = id;
    }
    public PlanTaskInfo getTaskInfo() {
        return taskInfo;
    }
    public void setTaskInfo(PlanTaskInfo taskInfo) {
        this.taskInfo = taskInfo;
    }
    public Clob getResult() {
        return result;
    }
    public void setResult(Clob result) {
        this.result = result;
    }
    public String getStrResult() {
        return strResult;
    }
    public void setStrResult(String strResult) {
        this.strResult = strResult;
    }
}
Here result is a CLOB column; in the Hibernate mapping file it is declared as:
<property name="result" type="java.sql.Clob" update="true" insert="true">
    <column name="RESULT"></column>
</property>
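As a side note, the same mapping can also be expressed with JPA annotations. This is only an illustrative sketch, not what the project actually uses (the project maps via the hbm.xml fragment above); the field names mirror the POJO shown earlier:

```java
import java.sql.Clob;
import javax.persistence.Column;
import javax.persistence.Lob;

// Hypothetical annotation-based equivalent of the XML <property> mapping above.
// Illustrative only; this project uses the XML mapping file shown.
public class PlanResultInfo {
    @Lob
    @Column(name = "RESULT")
    private Clob result;
    // ... remaining fields and accessors as in the POJO above
}
```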
Nothing looks wrong with that, so why Software caused connection abort: socket write error? I then noticed that the String being saved into the CLOB was sometimes enormous: lengths of several hundred thousand characters, upwards of 100 KB of data. In those cases the exception appeared, while small and medium amounts of data inserted fine. But isn't a CLOB supposed to hold up to 4 GB? Deciding to try anything at this point, I refactored the save following CLOB-saving examples found online. Here is the first refactored version:
Session session = null;
Transaction tx = null;
java.io.Writer writer = null;
Reader reader = null;
try {
    session = this.getSessionFactory().openSession();
    planResultInfo.setResult(Hibernate.createClob(" "));
    tx = session.beginTransaction();
    // save the maintenance job-plan result
    session.save(planResultInfo);
    session.flush();
    // refresh
    session.refresh(planResultInfo, LockMode.UPGRADE);
    tx.commit();
    LOGGER.infoT("END TASK " + planResultInfo.getTaskInfo().getId());
} catch (Exception e) {
    LOGGER.exception(e);
    // roll back the transaction
    session.getTransaction().rollback();
} finally {
    if (session != null) {
        if (session.isOpen()) {
            // close the session
            session.close();
        }
    }
}
Unfortunately, saving a large CLOB now produced a brand-new exception:
org.hibernate.exception.GenericJDBCException: Could not execute JDBC batch update
    at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:103)
    at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:91)
    at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
    at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:202)
    at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:235)
    at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:139)
    at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:297)
    at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
    at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:985)
    at com.wri.hy.itmanagerv2.dg.dao.ResultLogDao.saveCollection(ResultLogDao.java:50)
    at com.wri.hy.itmanagerv2.dg.autotask.sched.ResultLogThread.run(ResultLogThread.java:159)
    at java.lang.Thread.run(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.sql.BatchUpdateException: 无法从套接字读取更多的数据
    at oracle.jdbc.driver.DatabaseError.throwBatchUpdateException(DatabaseError.java:345)
    at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:10844)
    at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:58)
    at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:195)
    ... 11 more
Searching online for this exception (无法从套接字读取更多的数据, "No more data to read from socket") also turned up nothing, and swapping in a different ojdbc14.jar made no difference either. Then I found a blog describing a stream-based way of writing the CLOB, which led to the second refactored version:
Session session = null;
Transaction tx = null;
java.io.Writer writer = null;
try {
    session = this.getSessionFactory().openSession();
    planResultInfo.setResult(Hibernate.createClob(" "));
    tx = session.beginTransaction();
    // save the maintenance job-plan result
    session.save(planResultInfo);
    session.flush();
    // refresh
    session.refresh(planResultInfo, LockMode.UPGRADE);
    org.hibernate.lob.SerializableClob cb = (org.hibernate.lob.SerializableClob) planResultInfo
            .getResult();
    java.sql.Clob wrapClob = (java.sql.Clob) cb.getWrappedClob();
    if (wrapClob instanceof oracle.sql.CLOB) {
        oracle.sql.CLOB clob = (oracle.sql.CLOB) wrapClob;
        writer = new BufferedWriter(clob.getCharacterOutputStream());
        String contentStr = planResultInfo.getStrResult();
        LOGGER.infoT("result size : " + contentStr.length());
        writer.write(contentStr);
        writer.flush();
    }
    // save the maintenance job-plan log
    session.save(planLogInfo);
    tx.commit();
    LOGGER.infoT("END TASK " + planResultInfo.getTaskInfo().getId());
} catch (Exception e) {
    LOGGER.exception(e);
    // roll back the transaction
    session.getTransaction().rollback();
} finally {
    try {
        if (null != writer)
            writer.close();
    } catch (IOException ioe) {
        LOGGER.exception(ioe);
    }
    if (session != null) {
        if (session.isOpen()) {
            // close the session
            session.close();
        }
    }
}
That blog presented this approach with total confidence, and yet the save still failed, with exactly the same exception as at the very beginning:
java.io.IOException: Io 异常: Software caused connection abort: socket write error
    at oracle.jdbc.driver.DatabaseError.SQLToIOException(DatabaseError.java:519)
    at oracle.jdbc.driver.OracleClobWriter.write(OracleClobWriter.java:122)
    at java.io.Writer.write(Unknown Source)
    at java.io.Writer.write(Unknown Source)
    at com.wri.hy.itmanagerv2.dg.dao.ResultLogDao.saveCollection(ResultLogDao.java:64)
    at com.wri.hy.itmanagerv2.dg.autotask.sched.ResultLogThread.run(ResultLogThread.java:159)
    at java.lang.Thread.run(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
The failure is clearly inside java.io.Writer.write; presumably the blog author never saved CLOB data this large, where a single big write aborts the connection. The final refactor at last succeeded:
Session session = null;
Transaction tx = null;
java.io.Writer writer = null;
Reader reader = null;
try {
    session = this.getSessionFactory().openSession();
    planResultInfo.setResult(Hibernate.createClob(" "));
    tx = session.beginTransaction();
    // save the maintenance job-plan result
    session.save(planResultInfo);
    session.flush();
    // refresh
    session.refresh(planResultInfo, LockMode.UPGRADE);
    org.hibernate.lob.SerializableClob cb = (org.hibernate.lob.SerializableClob) planResultInfo
            .getResult();
    java.sql.Clob wrapClob = (java.sql.Clob) cb.getWrappedClob();
    if (wrapClob instanceof oracle.sql.CLOB) {
        oracle.sql.CLOB clob = (oracle.sql.CLOB) wrapClob;
        writer = new BufferedWriter(clob.getCharacterOutputStream());
        String contentStr = planResultInfo.getStrResult();
        LOGGER.infoT("result size : " + contentStr.length());
        // // truncate to the first 200000 characters, otherwise an exception occurs
        // if (contentStr.length() > 200000) {
        //     contentStr = contentStr.substring(0, 200000);
        // }
        reader = new BufferedReader(new StringReader(contentStr));
        char[] buffer = new char[1024];
        int length;
        // write the content in small chunks, flushing each time
        while ((length = reader.read(buffer)) > 0) {
            writer.write(buffer, 0, length);
            writer.flush();
        }
        // writer.write(contentStr);
        // OutputStream outputStream = clob.getAsciiOutputStream();
        //
        // InputStream inputStream = new BufferedInputStream(
        //         new ByteArrayInputStream(contentStr.getBytes()));
        // byte[] buff = new byte[10240];
        //
        // int len;
        //
        // while ((len = inputStream.read(buff)) > 0) {
        //     outputStream.write(buff, 0, len);
        //     outputStream.flush();
        // }
        // inputStream.close();
        // outputStream.close();
    }
    // save the maintenance job-plan log
    session.save(planLogInfo);
    tx.commit();
    LOGGER.infoT("END TASK " + planResultInfo.getTaskInfo().getId());
} catch (Exception e) {
    LOGGER.exception(e);
    // roll back the transaction
    session.getTransaction().rollback();
} finally {
    try {
        if (null != reader)
            reader.close();
        if (null != writer)
            writer.close();
    } catch (IOException ioe) {
        LOGGER.exception(ioe);
    }
    if (session != null) {
        if (session.isOpen()) {
            // close the session
            session.close();
        }
    }
}
The key point: for large CLOB writes, the data must be copied through a buffered stream in a loop of fixed-size char chunks. It takes longer to execute, but it completes without exceptions.
Summary: for Hibernate CLOB persistence, a plain saveOrUpdate succeeds when the CLOB payload is small, but large payloads abort the connection and gradually exhaust the connection pool. Switching to a stream writer is not enough by itself, either: writing the entire String in one call still causes a connection abort. The data has to be read and written through a buffer in chunks, reducing the I/O load each write places on the connection; that way even very large CLOB payloads insert reliably.
That is the path from java.sql.SQLException: 关闭的连接 to a streaming approach that finally saves large CLOBs. I hope it gives colleagues wrestling with CLOB persistence some useful reference.
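As an appendix, the core of the fix, copying a large string to the CLOB writer in small buffered chunks with a flush per chunk, can be sketched independently of Oracle and Hibernate. `ChunkedClobCopy` and `copyChunked` are illustrative names of my own; in the real DAO the `Writer` would come from `oracle.sql.CLOB.getCharacterOutputStream()` rather than a `StringWriter`:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.io.StringWriter;
import java.io.Writer;

// Illustrative sketch: stream a large string through a small char buffer,
// flushing after every chunk so no single write pushes too much data at once.
public class ChunkedClobCopy {

    // Copy all characters from reader to writer in 1 KB chunks.
    static void copyChunked(Reader reader, Writer writer) throws IOException {
        char[] buffer = new char[1024];
        int length;
        while ((length = reader.read(buffer)) > 0) {
            writer.write(buffer, 0, length);
            writer.flush(); // keep each underlying write small
        }
    }

    public static void main(String[] args) throws IOException {
        // Simulate a very large result string (several hundred thousand chars).
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 300000; i++) {
            sb.append('x');
        }
        StringWriter sink = new StringWriter(); // stands in for the CLOB writer
        copyChunked(new BufferedReader(new StringReader(sb.toString())), sink);
        System.out.println(sink.toString().length()); // prints 300000
    }
}
```

In the DAO above, `sink` corresponds to `new BufferedWriter(clob.getCharacterOutputStream())`, and the loop is exactly the one in the final refactored version.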