Sunday, March 25, 2012

Comparing Keys

Hi! I have a table with a composite primary key, and in a stored procedure I need to validate that new rows coming from a temporary table do not already exist before inserting them.
I used to have code similar to this for many tables, even ones with composite keys:

Delete Det_Order
Where Ltrim(Rtrim(Convert(Char(20), id))) +
      Ltrim(Rtrim(Convert(Char(20), renglon)))
In (Select Ltrim(Rtrim(Convert(Char(20), id))) +
           Ltrim(Rtrim(Convert(Char(20), renglon)))
    From ##TmpDet_Order)


But I just don't know what happens in this case: the process just hangs and never ends. Both of the key fields are integers. Do you have any idea what is going on? Or maybe you can tell me another way to do the same thing. Please ...

I have also tried this:

Delete from Det_Order
Where [id] + renglon
In (Select [id] + renglon
    From ##Det_Order)
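Note that summing the two integer keys cannot uniquely identify a row, which makes this version dangerous as well as slow: different key pairs can produce the same sum. A minimal sketch of the collision (the values are hypothetical):

```sql
-- (id = 1, renglon = 2) and (id = 2, renglon = 1) both sum to 3,
-- so "WHERE [id] + renglon IN (...)" can match -- and delete --
-- a row whose composite key is not actually in the temp table.
SELECT 1 + 2 AS key_sum   -- 3
SELECT 2 + 1 AS key_sum   -- also 3: a false match
```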

Rather than using the IN condition, try the T-SQL DELETE ... FROM extension; something like:

Delete Det_Order
from ##TmpDet_Order a
join Det_Order b
on a.[id] = b.[id]
and a.renglon = b.renglon

In general, I try to use the T-SQL extensions sparingly, but this seems like a good spot to use one.

|||I have also tried that, Kent, but I have the same problem.|||

Try running this query and let us know the record count it returns. Also, have you tried limiting the number of rows being deleted in a single batch?

select count(*)
from ##TmpDet_Order a with (nolock)
join Det_Order b with (nolock)
on a.[id] = b.[id]
and a.renglon = b.renglon
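One way to limit the rows deleted per batch, as suggested above, is a loop with SET ROWCOUNT (which works on SQL Server 2000; on 2005 and later you could use DELETE TOP (n) instead). A sketch, assuming the same tables and key columns:

```sql
SET ROWCOUNT 1000            -- cap each DELETE at 1000 rows
WHILE 1 = 1
BEGIN
    DELETE b
    FROM ##TmpDet_Order a
    JOIN Det_Order b
      ON a.[id] = b.[id]
     AND a.renglon = b.renglon

    IF @@ROWCOUNT = 0 BREAK  -- nothing left to delete
END
SET ROWCOUNT 0               -- restore the default
```

Smaller batches hold locks for shorter periods and keep the transaction log from growing in one large chunk.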

|||

OK, my row count is 4677.

The purpose of this process is to run every day, picking up the previous day's data.

|||

Well, that isn't normally enough to cause any grief. My knee-jerk reaction is to run Profiler to see what is going on; you might also be able to get at this by running sp_lock while your delete query is hung, because it sounds like you might be blocked. If you search this site you will find posts that discuss diagnosing performance problems.
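A sketch of that diagnosis, run from a second connection while the DELETE is hung (all three are standard SQL Server commands):

```sql
EXEC sp_who2       -- check the BlkBy column to see who is blocking whom
EXEC sp_lock       -- inspect the locks being held and waited on
DBCC OPENTRAN      -- show the oldest active transaction in the database
```

If sp_who2 shows another SPID in the BlkBy column for your session, the DELETE is waiting on a lock rather than doing work.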

More importantly, I'd like to recruit opinions from others who are more experienced.

Kent

|||It could be a resource contention issue. I would also check the indexing on the key.|||
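To check the indexing on the key, something like this lists the indexes defined on both tables (a temp table often has none, which forces a scan on every probe):

```sql
EXEC sp_helpindex 'Det_Order'               -- indexes on the permanent table
EXEC tempdb..sp_helpindex '##TmpDet_Order'  -- indexes on the temp table, if any
```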

How long does it take for a similar SELECT query to execute?

select b.*
from ##TmpDet_Order a
join Det_Order b
on a.[id] = b.[id]
and a.renglon = b.renglon

I have occasionally encountered problems with DELETE statements that are deleting from large tables.

How many rows are in Det_Order?

How many indexes are associated with Det_Order?

Are any of the indexes on Det_Order CLUSTERED indexes? Maintaining clustered indexes has a performance penalty -- but I would not have expected it to be substantial when deleting only 4700 rows.

|||

Ok, I have a clustered index on the field last_date_change and another one on id + renglon.

Could it affect the order?

|||

If your table is large, my understanding is that considerable effort must be expended to adjust the data to conform to the clustered index. Such maintenance is required merely as a result of deleting records -- it does not depend on the columns you used to determine which records to delete.

But you cannot have TWO clustered indexes on a single table.

You never told us how large your table is (rows, as well as megabytes), only that you were deleting 4700 rows, or so.

Dan

|||Oh yes, my table has 5,917,688 rows, and it has a clustered index on the date field.|||

That sounds like a lot of data to move about, to maintain the clustered index.

In any case, I would like to suggest that you make a copy of your table and change the CLUSTERED index to a NON-CLUSTERED index. Test your query against this copy of the table. (Unfortunately this change will increase the size of your indexes (by approx. 50 MB, if a DATETIME column is the only column of the CLUSTERED index), but I am expecting that it will improve the performance of any INSERT, UPDATE, and DELETE actions you perform on this table.)
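A sketch of that test, assuming the index is named IX_last_date_change (the real name is whatever sp_helpindex reports) and the copy is called Det_Order_Copy:

```sql
-- Drop the clustered index on the copy and rebuild it nonclustered
DROP INDEX Det_Order_Copy.IX_last_date_change   -- SQL Server 2000 syntax
CREATE NONCLUSTERED INDEX IX_last_date_change
    ON Det_Order_Copy (last_date_change)
```

Then run the same DELETE against the copy and compare the elapsed time.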

If you frequently perform millions of INSERT, UPDATE, and DELETE actions on this table, you may want to consider using FILLFACTOR = 50, or maybe 80, depending on the quantity of such actions. (For some of my processing, where I have only 100,000 rows in a table when I create the index, but at later stages of processing I add another 200,000 rows that will be intermingled with these others, I set the FILLFACTOR = 30 to leave room in the index for the new rows. Some of my INSERT actions that formerly took 7-10 minutes then took only a few seconds.)
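A FILLFACTOR is set when the index is built or rebuilt; for example, to rebuild the composite-key index leaving half of each leaf page free (the index name here is assumed):

```sql
CREATE INDEX IX_id_renglon
    ON Det_Order ([id], renglon)
    WITH DROP_EXISTING, FILLFACTOR = 50  -- rebuild, leaf pages half full
```

The free space absorbs new rows without page splits, at the cost of a larger index.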

|||Thanks everybody : )
