Why would a Clustered Index Seek return a higher “Actual Number of Rows” than there are rows in the table?
I'm troubleshooting an issue related to SQL Server (Azure SQL Database, technically) occasionally choosing a bad execution plan, presumably due to skewed stats. sp_updatestats fixes it every time, until a few hours or days later when a bad plan gets cached again.
Looking at the "bad" plan, I noticed something that strikes me as odd: there is a Clustered Index Seek on a table that currently has about 1.7 million rows. The "Estimated Number of Rows" for this operation is about 1,200, which is definitely in line with the average row count I would expect from that operation in this case, but the "Actual Number of Rows" is in excess of 60 million! Following the fat line from this leaf node, various downstream operations such as joins and sorts are being performed on all 60 million rows, causing excessive slowness, spills to tempdb, and other badness.
I must be misunderstanding what a Clustered Index Seek actually does, because I wouldn't think it's possible for it to "output" more rows than are in the underlying table. What could cause this? And better yet, any pointers on how to fix it?
[Bonus points for including something like "sp_updatestats fixes it every time but can't figure out how to fix it permanently? Go read this article." This has been a general problem for us on a few different fronts lately.]
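For reference, this is roughly the more targeted kind of statistics refresh we could run instead of sp_updatestats (which only resamples statistics whose underlying rows have been modified, at the default sampling rate). The schema and statistic names below are illustrative, not our actual objects:

-- Refresh all stats on the table behind the misbehaving seek; FULLSCAN reads
-- every row instead of sampling (statistic/index name is made up).
UPDATE STATISTICS dbo.ProductCatalog WITH FULLSCAN;

-- Or target a single statistic or index:
UPDATE STATISTICS dbo.ProductCatalog IX_ProductCatalog_SomeKey WITH FULLSCAN;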
sql-server optimization execution-plan azure-sql-database
asked 6 hours ago, edited 5 hours ago by Todd Menier
Can you post the plan on Paste The Plan, please?
– George.Palacios
6 hours ago
Certainly: brentozar.com/pastetheplan/?id=BktRIhC1V
– Todd Menier
6 hours ago
The 60M-row index seek in question is on ProductCatalog. I do see there's an index scan in the plan as well, and I may look into it, but the "good" plan contains that too, and time-wise it looks to be a non-factor in both cases.
– Todd Menier
6 hours ago
1 Answer
The Seek returns more rows because it is on the inner (bottom) side of a Nested Loop. Every row returned by the outer operation results in a new Seek operation. So you're not getting 60 million rows from a single Seek, but from over 9,000 of them (the number of executions).
Also of note: when looking at estimations, the total number of rows estimated will be Estimated Number of Executions multiplied by Estimated Number of Rows.
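A minimal sketch of the effect, using hypothetical Parent and Child tables rather than the query from the plan; the LOOP JOIN hint just forces the shape being described:

-- The seek on Child sits on the inner side of the nested loop, so it runs
-- once for every row coming out of Parent.
SELECT p.ParentID, c.ChildID
FROM dbo.Parent AS p
JOIN dbo.Child  AS c          -- Clustered Index Seek on Child (inner side)
    ON c.ParentID = p.ParentID
WHERE p.SomeFilter = 1
OPTION (LOOP JOIN);           -- force nested loops to illustrate the rescans

-- In the actual plan, the seek on Child reports:
--   Number of Executions  = rows produced by the outer (Parent) side, e.g. ~9,000
--   Actual Number of Rows = total across all executions,
--                           e.g. ~9,000 executions x ~6,600 rows per seek ≈ 60 million

Per execution the seek never returns more rows than the table holds; only the total across all executions can exceed it.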
answered 5 hours ago, edited 5 hours ago by Forrest
Ah, that makes perfect sense. 60 million / 9000 = a much more reasonable number. :)
– Todd Menier
5 hours ago
Thanks. I'm accepting this since it directly answers the "why", but I sure wish I knew where to go from here. I'm joining 9 tables, all of which look to be indexed properly. The plan that goes off the rails is (to simplify) joining tables A and B first, resulting in 60 million rows, and later joining C, which brings it down to zero. I basically want to ensure C is considered earlier, preferably without "hacks" like index hints or forced execution plans.
– Todd Menier
3 hours ago
@ToddMenier That might be worth asking as a separate question then. There is a large community here of query tuners who will love to help. Please note that you will want to share the view definition and probably a "good" plan as well.
– Forrest
3 hours ago
@ToddMenier - you need to see if you can get accurate estimates for
SELECT * FROM Buyer B JOIN CatalogBuyer CB ON CB.CompanyID = B.BuyerID JOIN Party P ON P.CompanyID = CB.CompanyID WHERE B.SellerID = 10424 AND P.Type IN (2,3)
The plan estimates 58 rows but the actual is 9,090, leading into that nested loops join you point out. The most reliable way to get accurate estimates would be to materialize that result into a temp table and join onto that instead, though this would mean rewriting the query to not use the view. Or try updating stats or creating filtered/multi-column stats.
– Martin Smith
3 hours ago
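A sketch of that temp-table approach, built from the query in the comment above; the column list and the commented-out join back into the main query are illustrative, since the view definition isn't shown:

-- Materialize the selective part first. The temp table gives the optimizer an
-- exact row count (and auto-created statistics) for the remaining joins.
SELECT B.BuyerID, CB.CompanyID, P.Type
INTO #BuyerParty
FROM Buyer B
JOIN CatalogBuyer CB ON CB.CompanyID = B.BuyerID
JOIN Party P         ON P.CompanyID  = CB.CompanyID
WHERE B.SellerID = 10424
  AND P.Type IN (2, 3);

-- Then rewrite the main query to join onto #BuyerParty instead of going through
-- the view (the actual join keys depend on the schema):
-- SELECT ...
-- FROM #BuyerParty BP
-- JOIN ProductCatalog PC ON PC.CompanyID = BP.CompanyID
-- ...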