Kernel’s original recipe

I really should write these brain waves down before I forget. First of all, an unspoken assumption in statistical signal processing is that the signal is normalized before you do anything, so that the noise variance ends up as some fractional value. I spent an entire week puzzling out how to make Stein's Unbiased Risk Estimate (SURE) work with a large signal amplitude (15-30) and simulated noise variances of 1, 2, and 3.
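I never wrote the estimate out in the post, so for reference, the standard SURE expression for a linear smoother \(\hat{\mu} = S_w y\) with noise variance \(\sigma^2\) is

\[ \mathrm{SURE}(w) = \frac{1}{n}\lVert y - S_w y \rVert^2 - \sigma^2 + \frac{2\sigma^2}{n}\,\operatorname{tr}(S_w), \]

an unbiased estimate of the risk \(\frac{1}{n}\,E\lVert S_w y - \mu \rVert^2\). One thing the formula makes plain: \(\sigma^2\) must be the noise variance on the same scale as \(y\). If the data are divided by a factor \(s\), \(\sigma^2\) has to be divided by \(s^2\), otherwise the fidelity term and the penalty term go out of proportion and the minimum drifts away from the true MSE minimum.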

I got risk estimates that were too big, which by itself would be fine, since that is only a scaling problem. But the position of the minimum wouldn't agree with the MSE. Worse, the result was unstable and changed from one run to the next.

Then it occurred to me that I should normalize the signal first. Only then did SURE work as expected, finding the optimal width of the Gaussian kernel used to smooth the test signal. It also works with a rectangular kernel.
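Here is a minimal sketch of that experiment. The circular convolution, the FFT-based smoothing, and normalizing by the sample standard deviation are all my own choices for illustration, as are the test signal, seed, and width grid; none of these details were in my notes.

import numpy as np

def gaussian_kernel(width, n):
    # Gaussian weights on circular lags 0, 1, ..., n/2, ..., 1,
    # normalized to sum to 1 so the smoother preserves the mean.
    lags = np.arange(n)
    lags = np.minimum(lags, n - lags)
    k = np.exp(-0.5 * (lags / width) ** 2)
    return k / k.sum()

def smooth(y, kernel):
    # Circular convolution via FFT; the smoother is a circulant matrix S.
    return np.real(np.fft.ifft(np.fft.fft(y) * np.fft.fft(kernel)))

def sure(y, kernel, sigma2):
    # SURE for a linear smoother S y:
    #   ||y - S y||^2 / n - sigma^2 + 2 * sigma^2 * tr(S) / n,
    # where for circular convolution tr(S) = n * kernel[0].
    n = len(y)
    resid = y - smooth(y, kernel)
    return np.sum(resid ** 2) / n - sigma2 + 2.0 * sigma2 * kernel[0]

rng = np.random.default_rng(0)
n = 1024
t = np.linspace(0.0, 1.0, n, endpoint=False)
clean = 20.0 * np.sin(2.0 * np.pi * 3.0 * t)   # large amplitude, as in the post
sigma2 = 2.0
noisy = clean + rng.normal(scale=np.sqrt(sigma2), size=n)

# Normalize first -- here by the sample standard deviation (my guess at
# what "normalize" should mean); the noise variance must be rescaled too.
scale = noisy.std()
y = noisy / scale
sigma2_scaled = sigma2 / scale ** 2

widths = np.arange(1.0, 40.0)
risk = [sure(y, gaussian_kernel(w, n), sigma2_scaled) for w in widths]
mse = [np.mean((smooth(y, gaussian_kernel(w, n)) - clean / scale) ** 2)
       for w in widths]
print("SURE minimum at width", widths[np.argmin(risk)])
print("MSE  minimum at width", widths[np.argmin(mse)])

The crucial line is sigma2 / scale ** 2: dividing the data by a factor s divides the noise variance by s squared. The rectangular-kernel case needs only a different kernel function; the SURE machinery is unchanged, since tr(S) is still n * kernel[0].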

It's somewhat frustrating that the tutorials don't mention normalizing the signal. Maybe it's a given for people who have done statistics, but my physics training has been mostly deterministic; I had regarded the statistical parts as messy add-ons.

I will try to implement SURE for other convolution kernels.

Another idea came to me an hour ago, but since I didn't write it down, alas, it has fled. Senior moment. Hopefully it will return.
