Scale dummy variables in logistic regression
Let's say I have a data set that mixes categorical and continuous features, and I would like to study the relative importance of each feature in predicting a certain class.
For that I am using logistic regression with an L1 penalty, because I want a sparse solution that maximizes the ROC AUC.
Before training the logistic regression, I first created dummy variables for my categorical features, and then I centered and scaled all my features, including the dummy variables I had created.
Is it valid to center and scale the dummy variables? I want to compare the coefficients of the fitted logistic regression in order to rank the features.
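For concreteness, the setup described above might be sketched as follows (a minimal example with invented column names, not my real data):

```python
# Hedged sketch of the setup described above: dummy-encode the categoricals,
# standardize everything (including the dummies, as the question does),
# then fit an L1-penalized logistic regression. All names are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "income": [30.0, 55.0, 42.0, 70.0, 38.0, 61.0],
    "city": ["a", "b", "a", "c", "b", "c"],
    "y": [0, 1, 0, 1, 0, 1],
})

# One dummy column per city level -> columns: income, city_a, city_b, city_c
X = pd.get_dummies(df[["income", "city"]], columns=["city"], dtype=float)
Xs = StandardScaler().fit_transform(X)  # scales the dummy columns too

clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(Xs, df["y"])
print(dict(zip(X.columns, clf.coef_[0].round(3))))
```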
Thanks for the help!
logistic classification importance
New contributor
asked 4 hours ago by shzt (162)
1 Answer
AUROC ($c$-index; concordance probability, Somers' $D_{xy}$ rank correlation) is not a valid objective for optimization. It is fooled by a terribly miscalibrated model and is inefficient. Maximum likelihood estimation exists for a reason: optimizing the log likelihood function results in optimality properties of the estimators.
And don't scale indicator variables. This adds confusion to the interpretation of coefficients.
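One way to follow this advice in practice is to standardize only the continuous columns and pass the 0/1 indicators through untouched. A minimal scikit-learn sketch (column names are hypothetical):

```python
# Scale only the continuous column; leave the 0/1 indicator column as-is.
# Column names ("age", "is_smoker") are invented for illustration.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "age": [23.0, 45.0, 31.0, 60.0],
    "is_smoker": [0, 1, 0, 1],
})

pre = ColumnTransformer(
    transformers=[("num", StandardScaler(), ["age"])],
    remainder="passthrough",  # indicator columns pass through unchanged
)
X = pre.fit_transform(df)
print(X)  # first column standardized, second column still 0/1
```

This keeps the indicator coefficients interpretable as log-odds differences between the two groups.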
Don't rank features unless you accompany this with bootstrap confidence intervals for the ranks. You'll find that variable importance measures are volatile. The data do not have sufficient information to tell you which features of the data are most important. This is even more true when predictors are correlated.
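A rough sketch of bootstrap confidence intervals for coefficient ranks, on synthetic data (this is my own illustration, not code from the answerer; with correlated predictors, expect the rank intervals to be wide):

```python
# Bootstrap confidence intervals for the ranks of |coefficients| in a
# logistic regression, on synthetic data with one correlated pair.
import numpy as np
from scipy.stats import rankdata
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 300, 4
X = rng.normal(size=(n, p))
X[:, 1] = 0.9 * X[:, 0] + rng.normal(scale=0.4, size=n)  # correlated pair
beta = np.array([1.0, 0.8, 0.3, 0.0])
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ beta)))).astype(int)

B = 200
ranks = np.empty((B, p))
for b in range(B):
    idx = rng.integers(0, n, n)                   # bootstrap resample
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    ranks[b] = rankdata(-np.abs(m.coef_[0]))      # rank 1 = largest |coef|

lo, hi = np.percentile(ranks, [2.5, 97.5], axis=0)
for j in range(p):
    print(f"feature {j}: rank 95% CI [{lo[j]:.0f}, {hi[j]:.0f}]")
```

Typically the intervals for the correlated features overlap substantially, which is exactly the volatility the answer warns about.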
Could you possibly say a bit more about this part: "The data do not have sufficient information to tell you which features of the data are most important"? I always thought that when two variables are z-transformed, one can say a change of one standard deviation in x leads to a change of b(x) standard deviations in y. Therefore I would interpret the variable with the larger beta as more influential on y than the others. It would be really helpful if you could add a few words and/or sources. Thanks in advance.
– TinglTanglBob
1 hour ago
answered 3 hours ago by Frank Harrell (54.1k)