Database Optimization for Millions to Tens of Millions of Rows (Part 2)

1. One project I worked on involved a data exchange with the People's Bank of China. They used plain text files exported in batches (dumped straight from the database with a C program),
e.g. 201003-1.txt, 201003-2.txt, ...
Each file was kept to around 10 MB and transferred over FTP.
Simple and reliable.
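
The original export was done in C; as a rough illustration of the same batching idea, here is a minimal sketch assuming MySQL's SELECT ... INTO OUTFILE (the table, columns, and file paths are hypothetical, not from the original project):

SQL code
-- A sketch only: exchange_record and its columns are made-up names,
-- and the original project exported via a C program rather than SQL.
SELECT id, account_no, amount, trade_date
  INTO OUTFILE '/data/exchange/201003-1.txt'
  FIELDS TERMINATED BY '|'
  LINES TERMINATED BY '\n'
  FROM exchange_record
 WHERE trade_date >= '2010-03-01' AND trade_date < '2010-04-01'
 ORDER BY id
 LIMIT 0, 100000;  -- next batch: LIMIT 100000, 100000 -> 201003-2.txt

Keeping each file around 10 MB makes a failed FTP transfer cheap to retry; for very deep batches, a WHERE id > :last_id range scan is cheaper than a large LIMIT offset.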

2.

The query before optimization:
SQL code
# Query_time: 5.967435 Lock_time: 0.000129 Rows_sent: 1 Rows_examined: 803401  
SET timestamp=1286843575;  
select livemessag0_.id as id38_, livemessag0_.isactive as isactive38_, livemessag0_.content as content38_, livemessag0_.createtime as createtime38_, livemessag0_.userid as userid38_, livemessag0_.objectid as objectid38_, livemessag0_.recordid as recordid38_, livemessag0_.type as type38_ from live_message livemessag0_ where (livemessag0_.objectid in (select livescrip1_.id from live_scrip livescrip1_ where livescrip1_.senderid='ff8080812aebac2d012aef6491b3666d')) and livemessag0_.type=2 limit 6;  

The query after optimization:
SQL code
select livemessag0_.id as id38_, 
        livemessag0_.isactive as isactive38_, 
        livemessag0_.content as content38_, 
        livemessag0_.createtime as createtime38_, 
        livemessag0_.userid as userid38_, 
        livemessag0_.objectid as objectid38_, 
        livemessag0_.recordid as recordid38_, 
        livemessag0_.type as type38_ 
from live_scrip livescrip1_
  left join live_message livemessag0_
    on livescrip1_.id = livemessag0_.objectid
where livescrip1_.senderid = 'ff8080812aebac2d012aef6491b3666d'
  and livemessag0_.type = 2
limit 6;

Summary: use subqueries sparingly and replace them with joins where the join is not too complex; this rewrite cut roughly one third off the execution time. It later turned out that livemessag0_.objectid had no index at all.
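
Given the missing index called out above, the first fix is simply to create it. A minimal sketch (the index names are illustrative, not taken from the original project):

SQL code
-- Index the join column that turned out to be unindexed
-- (index names here are made up for illustration).
CREATE INDEX idx_live_message_objectid ON live_message (objectid);
-- The filter column on the driving table is also worth indexing:
CREATE INDEX idx_live_scrip_senderid ON live_scrip (senderid);

With the join column indexed, MySQL can look up matching live_message rows directly instead of examining 800k+ rows, which is what Rows_examined: 803401 in the slow log points to.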

3. Batch commits for large-volume data modification

Source: http://sunxboy.iteye.com/blog/153886

For example, the original statement is:

    DELETE FROM HUGETABLE WHERE condition;

It can be replaced with the following, which deletes and commits in batches:

    BEGIN
        LOOP
            -- delete at most 9,999 rows per pass to keep each transaction small
            DELETE FROM HUGETABLE
            WHERE condition
            AND ROWNUM < 10000;
            -- stop once a pass deletes no rows
            EXIT WHEN SQL%NOTFOUND;
            COMMIT;
        END LOOP;
    END;
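
Committing after each batch keeps every transaction's undo (rollback) footprint small and releases locks sooner, at the cost of all-or-nothing semantics: if the block fails midway, the batches already committed stay deleted. Note that ROWNUM < 10000 removes at most 9,999 rows per pass, and SQL%NOTFOUND becomes true only when a pass deletes zero rows, which is what ends the loop.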

Reposted from lixg425.iteye.com/blog/1874940