Search results
Jun 30, 2023 · Strictly speaking, a logistic regression performs no classification; it returns predicted probabilities of event occurrence. Machine learning terminology, however, tends to label a problem a "classification" problem when the observed outcomes are categorical (e.g., dog vs. cat), which is a major use case for logistic regression.
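To make the distinction concrete, here is a minimal sketch (scikit-learn on synthetic data; both the library and the dataset are illustrative choices, not taken from the quoted post) showing that the fitted model returns probabilities, and class labels only appear once a cutoff is applied:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

probs = model.predict_proba(X)[:, 1]   # the regression output: P(y = 1 | x)
labels = (probs >= 0.5).astype(int)    # classification is a separate thresholding step

print(probs[:5])   # probabilities in (0, 1), not classes
print(labels[:5])  # 0/1 labels exist only after choosing a cutoff
```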
Dec 8, 2014 · Logistic regression is a regression model because it estimates the probability of class membership as a (transformation of a) linear function of the features. Frank Harrell has posted a number of answers on this website enumerating the pitfalls of regarding logistic regression as a classification algorithm.
Jul 20, 2015 · The hypothesis in logistic regression provides a measure of uncertainty about the occurrence of a binary outcome, based on a linear model. The output is bounded asymptotically between $0$ and $1$ and depends on a linear model, so that when the underlying regression line has value $0$, the logistic equation gives $\frac{e^0}{1+e^0} = 0.5$, providing a natural cutoff point for classification purposes.
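Spelling out the arithmetic behind that cutoff, using the post's own logistic form:

$$
p(\mathbf{x}) = \frac{e^{\mathbf{x}^\top\boldsymbol\beta}}{1 + e^{\mathbf{x}^\top\boldsymbol\beta}},
\qquad
\mathbf{x}^\top\boldsymbol\beta = 0
\;\Rightarrow\;
p = \frac{e^{0}}{1 + e^{0}} = \frac{1}{2}.
$$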
Jun 23, 2023 · After running the model, the overall classification accuracy becomes 94.1%, but it predicts several events as occurring. This is with the cutoff at 0.5. If I adjust the cutoff to 0.2 (which I read in another thread on this site is recommended for rare-event logistic regressions), the overall classification accuracy drops from 94.2% in Block 0 to 91.6% in Block 1.
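A hedged sketch of the trade-off the post is describing (synthetic rare-event data; only the 0.5 and 0.2 cutoffs come from the post, everything else is illustrative): lowering the cutoff sacrifices some overall accuracy to catch more of the rare events.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score

# roughly 6% events, standing in for a rare-event data set
X, y = make_classification(n_samples=5000, weights=[0.94], flip_y=0.02,
                           random_state=0)
probs = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

for cut in (0.5, 0.2):
    pred = (probs >= cut).astype(int)
    print(f"cut={cut}: accuracy={accuracy_score(y, pred):.3f}, "
          f"event recall={recall_score(y, pred):.3f}")
```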
Feb 27, 2019 · I have a binary classification problem where the classes are slightly imbalanced (a 25%-75% distribution). I have a total of around 35 features after some feature engineering, and the features are mostly continuous variables. I tried fitting a logistic model, an RF model, and an XGB model. They all seem to give me the same performance.
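A sketch of that comparison on synthetic 25/75 data; GradientBoostingClassifier stands in for XGBoost so the example needs only scikit-learn, and cross-validated AUC is one reasonable yardstick:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=35, n_informative=10,
                           weights=[0.75], random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```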
What is the difference between comparing the fitted vs. actual values of a logistic regression, and calculating the predicted probabilities on a training data set and then using them to test the predictive accuracy on a testing data set?
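A minimal sketch of the two evaluations the comment contrasts (the data and metric are illustrative): in-sample fit compares fitted vs. actual values on the training data, while out-of-sample accuracy on a held-out test set is the honest estimate of predictive performance.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
print("in-sample accuracy:    ", model.score(X_tr, y_tr))  # fitted vs. actual
print("out-of-sample accuracy:", model.score(X_te, y_te))  # held-out estimate
```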
Feb 16, 2018 · One main difference between classification trees and logistic regression is that the former outputs classes (-1, 1) while logistic regression outputs probabilities. One idea is to choose the best feature X from a set of features, pick a threshold (0.5?) to convert the probabilities to classes, and then use a weighted logistic regression to find the next feature, etc.
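A hedged sketch of that idea; the upweighting scheme below is my own illustrative choice, not something specified in the post:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y01 = make_classification(n_samples=500, random_state=0)
y = 2 * y01 - 1  # recode 0/1 as -1/+1, matching the tree-style class outputs

# stage 1: threshold logistic-regression probabilities into {-1, +1}
stage1 = LogisticRegression().fit(X, y01)
pred = np.where(stage1.predict_proba(X)[:, 1] >= 0.5, 1, -1)

# stage 2: a weighted logistic regression that upweights stage-1 mistakes
weights = np.where(pred != y, 2.0, 1.0)
stage2 = LogisticRegression().fit(X, y01, sample_weight=weights)
print(stage2.coef_)
```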
A use case I faced recently: trying to fit a logistic regression to the data in the figure below [figure not shown]. Here are the values of the intercept, w1 for Age, and w2 for EstimatedSalary: (array([-2.24944689e-10]), array([[-2.10415172e-09, -2.69301403e-06]])). The model has effectively dismissed the Age feature; the decision boundary for this case is illustrated in the second figure below [figure not shown].
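One common cause of near-zero coefficients like these (an assumption on my part; the post does not diagnose it) is unscaled features interacting with scikit-learn's default L2 penalty: a raw salary in the tens of thousands dwarfs a raw age in the tens. A sketch of the standard fix, on toy data with hypothetical age/salary columns:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
age = rng.uniform(18, 60, 500)
salary = rng.uniform(20_000, 150_000, 500)
y = ((age > 40) & (salary > 80_000)).astype(int)  # toy rule using both features
X = np.column_stack([age, salary])

raw = LogisticRegression(max_iter=1000).fit(X, y)
scaled = make_pipeline(StandardScaler(),
                       LogisticRegression(max_iter=1000)).fit(X, y)
print("raw coefficients:   ", raw.coef_)        # compare the magnitudes
print("scaled coefficients:", scaled[-1].coef_) # both features now on one scale
```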
Aug 27, 2015 · When I use logistic regression, the predictions are always all '1' (which means good loan). I have never seen this before and do not know where to start in sorting out the issue. There are 22 columns with 600K rows. When I decrease the number of columns, I get the same result with logistic regression.
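One diagnostic sketch for this symptom (my own checklist, not from the post): on heavily skewed data, a plain fit can predict the majority class for everything, and class weighting often breaks that pattern. Synthetic skewed data stands in for the loan data here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# ~98% of rows in class 1, standing in for a mostly-good loan book
X, y = make_classification(n_samples=5000, weights=[0.02], random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X, y)
balanced = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y)

print("plain predictions:   ", np.bincount(plain.predict(X), minlength=2))
print("balanced predictions:", np.bincount(balanced.predict(X), minlength=2))
```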
A neural network can have a single layer, in which case it can be equivalent to a logistic regression model (depending on the choice of activation function). In fact, neural networks historically started as single-layer models (e.g., the Perceptron).
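A minimal NumPy sketch of that equivalence: one layer of weights feeding a sigmoid activation computes exactly the logistic-regression model $p = \sigma(w \cdot x + b)$, and gradient descent on the log loss is the same optimization logistic regression performs. The data and learning rate are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)

w, b = np.zeros(3), 0.0
for _ in range(2000):                 # gradient descent on the log loss
    p = sigmoid(X @ w + b)            # the single-layer "forward pass"
    grad_w = X.T @ (p - y) / len(y)   # same gradient as logistic regression
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

print("learned weights:", w)  # recovers the direction of the true weights
```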