

in reply to Re: How would you code this?
in thread How would you code this?

Graff, you've certainly given me considerable food for thought. And a bunch of extra work to do. So "Thank you" and "thaaaanks a bunch" :)

Here is a composite zoom on the 5 differences between your output and mine.

One of the primary goals (if not the primary goal) of using a discrete filter algorithm, rather than a continuous fitting or smoothing algorithm, is to retain as much of the actual data as possible. Continuous algorithms like moving average, 3-point median and Loess all have effects on every point, not just those at and around the discontinuities: they often invent new points, subtly shift existing points well away from the discontinuities, and don't guarantee to remove all inflections.
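To illustrate (a toy example, not my actual data or cleanup code): even the simplest 3-point moving average touches every interior point, not just the ones near the spike:

    # Toy demonstration: a 3-point moving average shifts every interior
    # point, not just the ones at the discontinuity.
    my @y = ( 1, 2, 4, 10, 4, 2, 1 );
    my @smoothed = map {
        ( $y[ $_ - 1 ] + $y[$_] + $y[ $_ + 1 ] ) / 3
    } 1 .. $#y - 1;
    # Interior points (2, 4, 10, 4, 2) become roughly
    # (2.33, 5.33, 6, 5.33, 2.33): the spike shrinks, but every
    # neighbouring value has been invented anew.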

Upshot: I'm going to have to run your algorithm against mine on a few much larger samples, and check that yours has no significant downsides. If it doesn't, I will be dumping mine in favour of yours.


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority". I knew I was on the right track :)
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^3: How would you code this?
by graff (Chancellor) on Apr 07, 2016 at 14:04 UTC
    I noticed that some x values are repeated up to five times, and that the selection of a trajectory through such a region (depending on what the algorithm does) could vary between:
              c                c                c
           b        vs               vs        b
        a                a b              a
    (i.e. after dropping some number of points from the input series, the middle point either falls midway between the other two, or creates an "elbow" close to the previous or following point).

    I thought about working out a way to maintain a stack, such that from the first (bottom) to the last (top) stack element, there would be at most three distinct x values (that seems to be the extent of the "oscillation" you're seeing).

    Once a fourth x value appears in the next point, review the stack and output the points that comprise the smoothest trajectory. But that's a lot of work for a potentially insignificant gain in "accuracy".
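    A rough sketch of what I have in mind (untested, and the "smoothest trajectory" chooser here is just a median-per-x placeholder, not a worked-out criterion):

        use strict;
        use warnings;

        # Buffer points until a fourth distinct x value shows up, then
        # flush the buffered run through the trajectory chooser.
        # Points are [x, y] pairs, assumed already sorted by x.
        sub filter_runs {
            my ($points) = @_;
            my ( @out, @stack, %seen_x );
            for my $p (@$points) {
                if ( !exists $seen_x{ $p->[0] } && keys(%seen_x) == 3 ) {
                    push @out, flush_stack( \@stack );
                    @stack  = ();
                    %seen_x = ();
                }
                $seen_x{ $p->[0] } = 1;
                push @stack, $p;
            }
            push @out, flush_stack( \@stack ) if @stack;
            return \@out;
        }

        # Placeholder "smoothest" rule: keep one point per distinct x,
        # using the median y among that x's duplicates.
        sub flush_stack {
            my ($stack) = @_;
            my ( %ys, @order );
            for my $p (@$stack) {
                push @order, $p->[0] unless exists $ys{ $p->[0] };
                push @{ $ys{ $p->[0] } }, $p->[1];
            }
            return map {
                my @s = sort { $a <=> $b } @{ $ys{$_} };
                [ $_, $s[ $#s / 2 ] ];
            } @order;
        }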

    UPDATE: As for your "primary" goal of "preserving the original data" as much as possible, as opposed to creating a smoothed sequence with adjustments affecting all data points: given that the input appears to be quantized, any desire to preserve that original data is actually going to preserve the nature of the measuring device, rather than the nature of the thing being measured. Certainly there's value in that, but depending on what your downstream processes are supposed to do with the data, you might rather let those processes get a "fully smoothed" version of the data, because this represents a presumably reasonable fiction, as opposed to the quantized, jagged fiction being created by your measuring device.

    Another update: obviously, by preserving the original data - but not necessarily using it as-is for certain downstream processes - you'll get to do comparisons if/when you use a different device (or a different configuration of the current device).

      given that the input appears to be quantized,

      I agree the data is quantized. It appears to be (I'm still waiting for confirmation from the equipment manufacturer) an artifact of the digitisation of the analogue values produced by the sensing device.

      depending on what your downstream processes are supposed to do with the data,

      The cleanup is required because the downstream processing -- FEM software -- interpolates between the supplied values using cubic spline interpolation as the simulation converges. It therefore requires that the input data be monotonic, so that it can produce a 'single-valued cubic spline fit'.
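      So before anything else, the cleaned data has to pass a trivial pre-flight check along these lines (a sketch, not my production code; points assumed to be [x, y] pairs):

          # The FEM input must be strictly increasing in x; otherwise the
          # cubic spline fit becomes multi-valued and the solver misbehaves.
          sub is_monotonic_x {
              my ($points) = @_;
              for my $i ( 1 .. $#$points ) {
                  return 0 if $points->[$i][0] <= $points->[ $i - 1 ][0];
              }
              return 1;
          }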

      My choice to avoid producing a "fully smoothed" fit is because I've seen bad interactions between pre-processed fitting and the fitting done internally by the software. These manifest as interminable oscillations in the Newton-Raphson iterations, resulting in extremely extended run times.

      you'll get to do comparisons if/when you use a different device (or a different configuration of the current device).

      The data is supplied to me; I only get one dataset per sample, but lots of (physically and chemically) different samples. I have no control over how it is produced.


        <Insert obligatory critique of cubic spline interpolation here />

        Are you confident that the badly behaved regions of the dataset are bad discrete measurements as opposed to something that should be treated with a noise model? Do you have a physical understanding of what happens when the sensor generates an aberrant series?


        #11929 First ask yourself `How would I do this without a computer?' Then have the computer do it the same way.