🎯 Logistic Regression

Binary & Multi-class Classification

📖 Introduction

Despite its name, Logistic Regression is a classification algorithm, not regression! It's used to predict categorical outcomes (e.g., spam/not spam, disease/no disease) by estimating probabilities using the logistic function.

🎯 What is Logistic Regression?

Logistic Regression predicts the probability that an instance belongs to a particular class. The output is a probability between 0 and 1, produced by the sigmoid (logistic) function.

Sigmoid Function:

σ(z) = 1 / (1 + e⁻ᶻ)

Where z = β₀ + β₁x₁ + β₂x₂ + ... + βₙxₙ

import numpy as np
import matplotlib.pyplot as plt

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

z = np.linspace(-10, 10, 100)
y = sigmoid(z)

plt.figure(figsize=(10, 6))
plt.plot(z, y, linewidth=2)
plt.axhline(y=0.5, color='r', linestyle='--', label='Decision Boundary (0.5)')
plt.axvline(x=0, color='g', linestyle='--', alpha=0.5)
plt.xlabel('z (linear combination)')
plt.ylabel('σ(z) - Probability')
plt.title('Sigmoid Function')
plt.grid(True)
plt.legend()
plt.show()

# Key properties
print("σ(0) =", sigmoid(0))    # 0.5
print("σ(∞) =", sigmoid(100))  # ≈1
print("σ(-∞) =", sigmoid(-100)) # ≈0

🔧 How It Works

  1. Linear Combination: z = β₀ + β₁x₁ + β₂x₂ + ...
  2. Apply Sigmoid: p = σ(z) = probability between 0 and 1
  3. Make Decision: If p ≥ 0.5, predict class 1; else class 0
  4. Calculate Loss: Use log loss (binary cross-entropy)
  5. Optimize: Use gradient descent to minimize loss

Log Loss (Binary Cross-Entropy):

J(β) = -(1/m) Σ [y·log(p) + (1-y)·log(1-p)]

  • y = actual class (0 or 1)
  • p = predicted probability
  • m = number of samples
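
For a quick sense of what this loss measures, here is a minimal NumPy sketch; the label and probability arrays below are made-up values purely for illustration:

import numpy as np

# Made-up labels and predicted probabilities, just for illustration
y_true = np.array([1, 0, 1, 1, 0])
p_pred = np.array([0.9, 0.2, 0.7, 0.6, 0.1])

# Binary cross-entropy: J = -(1/m) Σ [y·log(p) + (1-y)·log(1-p)]
eps = 1e-15  # avoid log(0)
loss = -np.mean(y_true * np.log(p_pred + eps) +
                (1 - y_true) * np.log(1 - p_pred + eps))
print(f"Log loss: {loss:.4f}")  # lower is better; confident wrong predictions are penalized heavily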

🐍 Binary Classification Example

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
from sklearn.preprocessing import StandardScaler

# Generate synthetic dataset
np.random.seed(42)
n_samples = 200

# Class 0
X_class0 = np.random.randn(n_samples//2, 2) + np.array([2, 2])
y_class0 = np.zeros(n_samples//2)

# Class 1
X_class1 = np.random.randn(n_samples//2, 2) + np.array([5, 5])
y_class1 = np.ones(n_samples//2)

X = np.vstack([X_class0, X_class1])
y = np.hstack([y_class0, y_class1])

# Visualize data
plt.figure(figsize=(8, 6))
plt.scatter(X[y==0, 0], X[y==0, 1], c='blue', label='Class 0', alpha=0.6)
plt.scatter(X[y==1, 0], X[y==1, 1], c='red', label='Class 1', alpha=0.6)
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title('Binary Classification Dataset')
plt.legend()
plt.grid(True)
plt.show()

# Split data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Scale features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Train model
model = LogisticRegression()
model.fit(X_train_scaled, y_train)

# Predictions
y_pred = model.predict(X_test_scaled)
y_pred_proba = model.predict_proba(X_test_scaled)

# Evaluate
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy:.2%}")

print("\nConfusion Matrix:")
print(confusion_matrix(y_test, y_pred))

print("\nClassification Report:")
print(classification_report(y_test, y_pred))

# Display some predictions with probabilities
print("\nSample Predictions:")
for i in range(5):
    print(f"True: {int(y_test[i])}, Pred: {int(y_pred[i])}, " +
          f"Prob(0): {y_pred_proba[i][0]:.3f}, Prob(1): {y_pred_proba[i][1]:.3f}")

# Visualize decision boundary
def plot_decision_boundary(model, X, y, scaler):
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx, yy = np.meshgrid(np.linspace(x_min, x_max, 200),
                         np.linspace(y_min, y_max, 200))
    
    Z = model.predict(scaler.transform(np.c_[xx.ravel(), yy.ravel()]))
    Z = Z.reshape(xx.shape)
    
    plt.figure(figsize=(10, 6))
    plt.contourf(xx, yy, Z, alpha=0.3, cmap='RdYlBu')
    plt.scatter(X[y==0, 0], X[y==0, 1], c='blue', label='Class 0', edgecolors='k')
    plt.scatter(X[y==1, 0], X[y==1, 1], c='red', label='Class 1', edgecolors='k')
    plt.xlabel('Feature 1')
    plt.ylabel('Feature 2')
    plt.title('Logistic Regression Decision Boundary')
    plt.legend()
    plt.grid(True)
    plt.show()

plot_decision_boundary(model, X_test, y_test, scaler)

🌍 Multi-class Classification

Logistic Regression can handle more than two classes using either the One-vs-Rest (OvR) approach, which trains one binary classifier per class, or the multinomial (softmax) formulation, sketched below.
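
Under the multinomial formulation, the sigmoid generalizes to the softmax function, which turns one linear score per class into a probability distribution. A minimal sketch (the score values below are made up for illustration):

import numpy as np

def softmax(scores):
    # Subtract the max score for numerical stability before exponentiating
    exp_scores = np.exp(scores - np.max(scores))
    return exp_scores / exp_scores.sum()

# Hypothetical linear scores z_k = β_k·x for three classes
scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
print(probs)        # ≈ [0.659 0.242 0.099]
print(probs.sum())  # 1.0 - class probabilities always sum to one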

from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsRestClassifier

# Load the multi-class dataset under separate names so the binary
# classification variables above stay intact for later sections
iris = load_iris()
X_iris, y_iris = iris.data, iris.target

# Split data
X_train_iris, X_test_iris, y_train_iris, y_test_iris = train_test_split(
    X_iris, y_iris, test_size=0.2, random_state=42
)

# Method 1: Multinomial (softmax) - the default for multi-class with the 'lbfgs' solver
model_multi = LogisticRegression(max_iter=200)
model_multi.fit(X_train_iris, y_train_iris)

# Method 2: One-vs-Rest - one binary classifier per class
model_ovr = OneVsRestClassifier(LogisticRegression(max_iter=200))
model_ovr.fit(X_train_iris, y_train_iris)

# Compare
y_pred_multi = model_multi.predict(X_test_iris)
y_pred_ovr = model_ovr.predict(X_test_iris)

print("Multinomial Accuracy:", accuracy_score(y_test_iris, y_pred_multi))
print("OvR Accuracy:", accuracy_score(y_test_iris, y_pred_ovr))

print("\nMultinomial Classification Report:")
print(classification_report(y_test_iris, y_pred_multi,
                            target_names=iris.target_names))

🔧 Logistic Regression from Scratch

class LogisticRegressionScratch:
    def __init__(self, learning_rate=0.01, iterations=1000):
        self.lr = learning_rate
        self.iterations = iterations
        self.weights = None
        self.bias = None
        
    def sigmoid(self, z):
        return 1 / (1 + np.exp(-np.clip(z, -500, 500)))
    
    def fit(self, X, y):
        n_samples, n_features = X.shape
        self.weights = np.zeros(n_features)
        self.bias = 0
        
        # Gradient descent
        for i in range(self.iterations):
            # Forward pass
            linear_pred = np.dot(X, self.weights) + self.bias
            predictions = self.sigmoid(linear_pred)
            
            # Calculate gradients
            dw = (1/n_samples) * np.dot(X.T, (predictions - y))
            db = (1/n_samples) * np.sum(predictions - y)
            
            # Update parameters
            self.weights -= self.lr * dw
            self.bias -= self.lr * db
            
            # Print loss every 100 iterations
            if i % 100 == 0:
                loss = -np.mean(y*np.log(predictions + 1e-15) + 
                              (1-y)*np.log(1-predictions + 1e-15))
                print(f"Iteration {i}, Loss: {loss:.4f}")
    
    def predict(self, X):
        linear_pred = np.dot(X, self.weights) + self.bias
        y_pred = self.sigmoid(linear_pred)
        # Apply the 0.5 threshold: p >= 0.5 -> class 1, otherwise class 0
        return (y_pred >= 0.5).astype(int)
    
    def predict_proba(self, X):
        linear_pred = np.dot(X, self.weights) + self.bias
        return self.sigmoid(linear_pred)

# Test custom implementation
model_scratch = LogisticRegressionScratch(learning_rate=0.1, iterations=1000)
model_scratch.fit(X_train_scaled, y_train)

y_pred_scratch = model_scratch.predict(X_test_scaled)
accuracy_scratch = accuracy_score(y_test, y_pred_scratch)
print(f"\nCustom Model Accuracy: {accuracy_scratch:.2%}")

📊 Model Evaluation Metrics

from sklearn.metrics import roc_curve, roc_auc_score, precision_recall_curve

# Get probability predictions
y_proba = model.predict_proba(X_test_scaled)[:, 1]

# ROC Curve
fpr, tpr, thresholds = roc_curve(y_test, y_proba)
roc_auc = roc_auc_score(y_test, y_proba)

plt.figure(figsize=(12, 5))

# Plot ROC Curve
plt.subplot(1, 2, 1)
plt.plot(fpr, tpr, linewidth=2, label=f'ROC (AUC = {roc_auc:.2f})')
plt.plot([0, 1], [0, 1], 'k--', label='Random')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.legend()
plt.grid(True)

# Precision-Recall Curve
precision, recall, _ = precision_recall_curve(y_test, y_proba)
plt.subplot(1, 2, 2)
plt.plot(recall, precision, linewidth=2)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-Recall Curve')
plt.grid(True)

plt.tight_layout()
plt.show()

⚙️ Hyperparameters

  • C: inverse of regularization strength; smaller C means stronger regularization
  • penalty: regularization type, 'l1' (Lasso) or 'l2' (Ridge)
  • solver: optimization algorithm, e.g. 'liblinear', 'saga', 'lbfgs'
  • max_iter: maximum number of iterations allowed for convergence
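
To see the effect of C in practice, one can fit models with a few different values and compare coefficient magnitudes; a small sketch reusing the scaled binary-classification data from earlier:

# Smaller C = stronger regularization = coefficients shrink toward zero
for C in [0.01, 1, 100]:
    clf = LogisticRegression(C=C, max_iter=200)
    clf.fit(X_train_scaled, y_train)
    print(f"C={C:>6}: coefficients = {clf.coef_[0].round(3)}, "
          f"test accuracy = {clf.score(X_test_scaled, y_test):.2%}")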

from sklearn.model_selection import GridSearchCV

# Define parameter grid
param_grid = {
    'C': [0.001, 0.01, 0.1, 1, 10, 100],
    'penalty': ['l1', 'l2'],
    'solver': ['liblinear']
}

# Grid search
grid_search = GridSearchCV(
    LogisticRegression(max_iter=200),
    param_grid,
    cv=5,
    scoring='accuracy'
)

grid_search.fit(X_train_scaled, y_train)

print("Best Parameters:", grid_search.best_params_)
print("Best Score:", grid_search.best_score_)

# Use best model
best_model = grid_search.best_estimator_
y_pred_best = best_model.predict(X_test_scaled)
print("Test Accuracy:", accuracy_score(y_test, y_pred_best))

✅ Advantages & Disadvantages

✅ Advantages

  • Simple and interpretable
  • Fast training and prediction
  • Outputs probabilities
  • Works well with linearly separable data
  • Regularization prevents overfitting
  • Handles multi-class classification

❌ Disadvantages

  • Assumes a linear decision boundary
  • Struggles with non-linear relationships (see the sketch after this list)
  • Sensitive to outliers
  • Benefits from feature scaling, especially with regularization
  • May underfit complex patterns
  • Assumes little or no multicollinearity among features
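
The linear-boundary limitation is easy to see on data where one class surrounds the other: plain Logistic Regression barely beats chance, while adding simple non-linear features (squared terms) recovers the structure. A small sketch using scikit-learn's make_circles:

from sklearn.datasets import make_circles

# Concentric circles: no straight line can separate the two classes
X_c, y_c = make_circles(n_samples=500, noise=0.1, factor=0.4, random_state=42)

linear_clf = LogisticRegression().fit(X_c, y_c)
print("Raw features accuracy:", linear_clf.score(X_c, y_c))      # near chance (~0.5)

# With squared features the classes become linearly separable again
X_c_sq = np.hstack([X_c, X_c**2])
quad_clf = LogisticRegression().fit(X_c_sq, y_c)
print("With squared features:", quad_clf.score(X_c_sq, y_c))     # close to 1.0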

🎯 When to Use Logistic Regression