Hi,
There have been regular large deletes over the past few months on a big table that is 84 GB (888,879,462 rows). We are using DB2 v8.2. I noticed that a program that inserts records after reading data from a file has been running noticeably slower lately. As far as I can tell, select performance isn't as badly affected. Am I correct in assuming that a REORG of the indexes on the table will fix the problem? To test that, I ran the REORG command on our backup database and got back an error saying the transaction log is full.
Here is the command that I gave --
db2 reorg indexes all for table <tablename> allow write access cleanup only
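For what it's worth, I only tried the CLEANUP ONLY form above; I have not tried a full index rebuild, which I believe would look something like this (not tested on this table):
db2 reorg indexes all for table <tablename> allow read access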
Here is what I have in the config for the log files:
db2 get db config for dbname | grep -i logfilsiz
Log file size (4KB) (LOGFILSIZ) = 8000
db2 get db config for dbname | grep -i PRIMARY
Number of primary log files (LOGPRIMARY) = 10
db2 get db config for dbname | grep -i SECONDARY
Number of secondary log files (LOGSECOND) = 5
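If my math is right, that works out to about 8000 pages x 4 KB = ~31 MB per log file, and (10 primary + 5 secondary) x ~31 MB = roughly 470 MB of total active log space, which I am guessing is not much for a REORG on a table this size.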
Can someone please help? Is there any other solution I can try? We will eventually be splitting the huge table up based on certain ranges, but until that is done I need a way to at least bring the insert speed back to what it was (if not faster).
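One thing I was considering (not sure if it is the right approach) is temporarily bumping the log configuration on the backup database before retrying the REORG, something along these lines -- the values are just guesses on my part:
db2 update db cfg for dbname using LOGFILSIZ 20000
db2 update db cfg for dbname using LOGSECOND 60
and then setting them back afterwards. My understanding is that the LOGFILSIZ change only takes effect after the database is deactivated and reactivated, but I am not certain of that.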
Thanks in advance..