Taming Infinity: Regularizing Generalized Derivatives with Distribution Theory & SageMath
Why Should We Care About Generalized Derivatives?
What happens when calculus breaks?
Imagine you're working with \( \ln(x) \), and you try to differentiate it at or near zero. Classical calculus screams: "Nope, that’s divergent!"
But physics doesn’t stop for undefined math. Quantum field theory, signal processing, and PDEs all need to work with such functions.
So what do we do?
We turn to distribution theory, a mathematical framework where functions like \( \ln(x) \) can be differentiated meaningfully, just in a smarter way.
The Mysterious Function: \( \ln(x)_{+} \)
Let’s explore this function:
\[ \ln(x)_{+} =
\begin{cases}
0, & x < 0 \\
\ln(x), & x > 0
\end{cases} \] If we try to differentiate it in the classical sense, we hit a wall: \[ \left\langle (\ln x_{+})', \varphi \right\rangle = -\int_{0}^{\infty} \ln(x) \varphi'(x) \, dx
\] Integrating by parts to expose the classical derivative \( 1/x \) produces \( \int_{0}^{\infty} \varphi(x)/x \, dx \), which blows up at \( x = 0 \) because \( 1/x \) is not integrable near the origin.
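To see the trouble concretely, here is a small numerical sketch (my own illustration, not from the post) that pairs \( 1/x \) against the sample test function \( \varphi(x) = e^{-x^2} \) with a lower cutoff \( \varepsilon \). The result grows like \( -\ln\varepsilon \) instead of settling down:

```python
import numpy as np
from scipy.integrate import quad

# Naive pairing of 1/x with phi(x) = exp(-x^2), cut off at a small eps.
# The result grows like -log(eps): there is no finite limit as eps -> 0.
phi = lambda x: np.exp(-x**2)
for eps in [1e-2, 1e-4, 1e-6]:
    val, _ = quad(lambda x: phi(x) / x, eps, 10)
    print(f"eps = {eps:.0e}: integral = {val:.4f}")
```

Each time the cutoff shrinks by a factor of 100, the value jumps by roughly \( \ln 100 \approx 4.6 \), the signature of a logarithmic divergence.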
So... how do we make sense of it?
Regularization: The Art of Making the Infinite Finite
To save this integral, we subtract off the part that causes the blow-up.
We know that test functions \( \varphi(x) \) are smooth, so: \[ \varphi(x) \approx \varphi(0) + \text{higher-order terms} \] If we subtract \( \varphi(0)\,\theta(1 - x) \), we neutralize the singularity: \[ \left\langle (\ln x_{+})', \varphi \right\rangle = \int_{0}^{\infty} \frac{\varphi(x) - \varphi(0)\,\theta(1 - x)}{x} \, dx
\]
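The Taylor argument can be checked symbolically. In this small SymPy sketch (my own, with \( \varphi(x) = e^{-x^2} \) as the sample test function), the quotient \( (\varphi(x) - \varphi(0))/x \) has a finite limit at the origin, namely \( \varphi'(0) \):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
phi = sp.exp(-x**2)                    # sample smooth test function
quotient = (phi - phi.subs(x, 0)) / x  # the subtracted integrand near 0

# The limit equals phi'(0), which is finite -- here it is 0.
print(sp.limit(quotient, x, 0))        # -> 0
```

So after the subtraction, the integrand is bounded near zero and the integral converges.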
Intuition Break: What Are We Doing Here?
We're essentially saying:
- "Let's take out the infinite spike near zero by subtracting the smooth 'flat' part of the test function that doesn't decay fast enough."
This trick is widely used in quantum field theory, where divergent integrals are everywhere. The idea is to renormalize—make sense of the infinite by modifying the setup cleverly.
Visualizing the Regularization Effect
Let’s plot how \(\varphi(x) \) and \( \varphi(x)- \varphi(0) \theta(1 - x) \) look.
# Setup
x = var('x')
phi(x) = exp(-x^2)               # smooth test function
phi0 = phi(0)
theta(x) = unit_step(1 - x)      # cutoff theta(1 - x): 1 for x < 1, 0 for x > 1
reg_phi(x) = phi(x) - phi0 * theta(x)
# Plot
plot1 = plot(phi(x), (x, 0, 3), color='blue', legend_label='phi(x)')
plot2 = plot(reg_phi(x), (x, 0, 3), color='red', legend_label='Regularized phi(x)')
show(plot1 + plot2)
💡 Try It Yourself! You can copy and paste the code directly into an online cell: Run SageMath Code Here
Notice how the red curve dips to zero at the origin—that's the divergence being neutralized.
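The same cancellation can be confirmed numerically (a quick sanity check of my own, in plain Python rather than Sage): just to the right of the origin, the regularized function is essentially zero, while past the cutoff it agrees with \( \varphi \) itself.

```python
import numpy as np

phi = lambda x: np.exp(-x**2)
theta = lambda x: 1.0 if x < 1 else 0.0    # the cutoff theta(1 - x)
reg = lambda x: phi(x) - phi(0) * theta(x)

# Near x = 0 the subtraction cancels phi(0), so reg(x) ~ -x^2 -> 0.
print(reg(1e-6))   # essentially 0
print(reg(2.0))    # equals phi(2.0): the cutoff is off for x > 1
```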
SymPy Walkthrough: Step-by-Step
The steps below use SymPy, which ships with SageMath, so they also run inside a Sage session.
Step 1: Define the Regularized Integrand
import sympy as sp
x, epsilon = sp.symbols('x epsilon', real=True, positive=True)
# Define test function and Heaviside function
phi = sp.exp(-x**2) # Example smooth function
phi0 = phi.subs(x, 0) # Evaluate at x=0
theta = sp.Piecewise((1, x < 1), (0, True))  # theta(1 - x): 1 for x < 1, 0 otherwise
# Define regularized integrand with epsilon shift to remove singularity
f_reg = (phi - phi0 * theta) / (x + epsilon**2)
# Display the regularized function
sp.pprint(f_reg)
Step 2: Perform Symbolic Integration
I = sp.integrate(f_reg, (x, epsilon, sp.oo))  # symbolic in epsilon; the Piecewise can make this slow
sp.pprint(I)
Step 3: Take the Limit as \( \epsilon \to 0 \)
epsilon_val = 1e-6
approx_limit_I = I.subs(epsilon, epsilon_val).evalf()
sp.pprint(approx_limit_I)
Step 4: Try with a Concrete Test Function (Numerical Integration)
import numpy as np
from scipy.integrate import quad
# Define the numerical function
def f_numeric(x, eps=1e-6):
    return (np.exp(-x**2) - np.exp(0) * (1 if x < 1 else 0)) / (x + eps**2)
# Perform numerical integration
numerical_result, error = quad(f_numeric, 1e-6, 10)
print("Numerical Integration:", numerical_result)
Step 5: Validate Integral Convergence for Different \( \epsilon \) Values
eps_values = [1e-3, 1e-6, 1e-9, 1e-12, 1e-15]
for eps in eps_values:
    result, _ = quad(lambda x: f_numeric(x, eps), 1e-6, 10)
    print(f"Epsilon = {eps}, Integral = {result}")
Step 6: Visualize Integral Stability vs. \( \epsilon \)
import matplotlib.pyplot as plt
eps_values = [1e-3, 1e-6, 1e-9, 1e-12, 1e-15]
integral_results = [-0.2886079600093639, -0.2886087324544529, -0.2886087324552253, -0.2886087324552253, -0.2886087324552253]
plt.plot(eps_values, integral_results, marker='o', linestyle='-', color='red')
plt.xscale("log")
plt.xlabel("Epsilon")
plt.ylabel("Integral Value")
plt.title("Regularized Integral vs. Epsilon")
plt.grid()
plt.show()
💡 Try It Yourself! You can copy and paste the code directly into an online cell: Run SageMath Code Here
Beyond This Blog: Physics and Green's Functions
This regularization trick shows up in:
- Quantum Electrodynamics (QED): where divergent integrals are tamed with renormalization.
- Green’s functions in PDEs: where singularities like \( 1/x \) and \( \ln|x| \) arise naturally.
- Hilbert transforms and signal analysis: principal value integrals depend on similar ideas.
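As a taste of the principal value idea mentioned above, here is a short numerical sketch of my own (the helper `pv_over_x` is hypothetical, not from the post): cutting out a symmetric window \( (-\varepsilon, \varepsilon) \) lets the two halves of \( \varphi(x)/x \) cancel. For the even test function \( \varphi(x) = e^{-x^2} \), the integrand is odd, so the principal value is zero.

```python
import numpy as np
from scipy.integrate import quad

def pv_over_x(phi, eps=1e-6, R=10.0):
    """Symmetric-cutoff approximation to the principal value of phi(x)/x."""
    left, _ = quad(lambda x: phi(x) / x, -R, -eps)
    right, _ = quad(lambda x: phi(x) / x, eps, R)
    return left + right

# For the even function exp(-x^2), phi(x)/x is odd: the halves cancel.
print(pv_over_x(lambda x: np.exp(-x**2)))   # ~ 0
```

Each half diverges like \( \ln\varepsilon \) on its own; only their sum is stable, which is exactly what the principal value formalizes.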
Coming Up Next
Next time, we’ll tackle: \[ \int_{-\infty}^{\infty} \frac{\varphi(x)}{x} \, dx \] This integral diverges at the origin, but its symmetric structure leads us to the powerful concept of the Cauchy principal value.
Stay tuned—and bring your infinities. We’re not done yet.
If you have any queries, do not hesitate to reach out.
Unsure about something? Ask away—I’m here for you!