PerlMonks
Re: Abstract image registration or feature detection

by pryrt (Abbot)
on Jul 02, 2022 at 18:14 UTC ( [id://11145244] )


in reply to Abstract image registration or feature detection [UPDATED w examples]

I am hoping that etj will come by and make this more idiomatic PDL. But given that I'm a PDL hack, this is what I can come up with, by taking the approximate coordinates from the image supplied and doing some manual math using PDL:

#!perl -l
use 5.012;      # //, strict, say
use warnings;
use PDL;
use PDL::Matrix;

print my $red_in  = (mpdl [[58,48], [108,155], [186,80], [255,191], [331,48]])->transpose->append(ones(1));
print my $red_out = (mpdl [[471,15], [531,141], [603,90], [682,227], [747,107]])->transpose->append(ones(1));
print my $blue_in = (mpdl [[125,73], [197,158], [282,94]])->transpose->append(ones(1));
print my $blue_out_for_checking = (mpdl [[542,64], [622,175], [701,138]])->transpose->append(ones(1));

print for
    my $sx  = sum( $red_in->slice(0) ),
    my $sy  = sum( $red_in->slice(1) ),
    my $s1  = sum( $red_in->slice(2) ),
    my $sxx = sum( $red_in->slice(0)**2 ),
    my $syy = sum( $red_in->slice(1)**2 ),
    my $sxy = sum( $red_in->slice(0)*$red_in->slice(1) ),
    my $sv  = sum( $red_out->slice(0) ),
    my $sw  = sum( $red_out->slice(1) ),
    my $sxv = sum( $red_in->slice(0)*$red_out->slice(0) ),
    my $sxw = sum( $red_in->slice(0)*$red_out->slice(1) ),
    my $syv = sum( $red_in->slice(1)*$red_out->slice(0) ),
    my $syw = sum( $red_in->slice(1)*$red_out->slice(1) ),
    ;

print my $G = mpdl [
    [ $sxx, $sxy, $sx ],
    [ $sxy, $syy, $sy ],
    [ $sx,  $sy,  $s1 ],
];
print my $Q = mpdl [
    [ $sxv, $sxw ],
    [ $syv, $syw ],
    [ $sv,  $sw  ],
];
print my $P = inv($G) x $Q;
print my $abcdef001 = ($P->transpose)->append(zeros(1))->set(2,2,1);
print my $blue_out_calc = $abcdef001 x $blue_in;
print $blue_out_calc - $blue_out_for_checking;  # no more than one pixel off
__END__
I am not a PDL expert, so I am sure there's a way with its linear algebra to skip doing all the math. But I enjoy the math derivation.

You have a transformation from IN to OUT, where IN and OUT are of the form where each point #i is a column vector of the form
    [ x_i ]
    [ y_i ]
So for each point, you can think of it as OUT = M * IN + T:
    OUT = [ a b ] . IN + [ c ]
          [ d e ]        [ f ]
But if you want to work with all of IN and OUT at once, rather than individually (embrace the power of the matrix), you can pad it out into a single multiplication as
    OUT = [ a b c ]   [ x1 x2 x3 ... xN ]   [ v1 v2 v3 ... vN ]
          [ d e f ] * [ y1 y2 y3 ... yN ] = [ w1 w2 w3 ... wN ]
          [ 0 0 1 ]   [ 1  1  1  ... 1  ]   [ 1  1  1  ... 1  ]
So if you want to figure out the expanded M, you want the best fit on OUT = M * IN (with the extra rows).

To figure out the best fit, you basically have the equations
    v_i = a*Xi + b*Yi + c*1
    w_i = d*Xi + e*Yi + f*1
Best fit is least-sum-squared-error, so
    err_v_i = a*Xi + b*Yi + c*1 - Vi
    err_w_i = d*Xi + e*Yi + f*1 - Wi
And the squared error is
    esq_Vi = a²*Xi² + b²*Yi² + c²*1 + Vi² + 2ab*Xi*Yi + 2ac*Xi - 2a*Xi*Vi + 2bc*Yi - 2b*Yi*Vi - 2c*Vi
    esq_Wi = exact same form (I will just show the derivation on the Vi terms for now)
Sum up the total error:
    SSEv = sum(a²*Xi² + b²*Yi² + c²*1 + Vi² + 2ab*Xi*Yi + 2ac*Xi - 2a*Xi*Vi + 2bc*Yi - 2b*Yi*Vi - 2c*Vi)
         = sum(a²*Xi²) + sum(b²*Yi²) + sum(c²*1) + sum(Vi²) + sum(2ab*Xi*Yi) + sum(2ac*Xi) - sum(2a*Xi*Vi) + sum(2bc*Yi) - sum(2b*Yi*Vi) - sum(2c*Vi)
         = a²*sum(Xi²) + b²*sum(Yi²) + c²*sum(1) + sum(Vi²) + 2ab*sum(Xi*Yi) + 2ac*sum(Xi) - 2a*sum(Xi*Vi) + 2bc*sum(Yi) - 2b*sum(Yi*Vi) - 2c*sum(Vi)
We need to find the minimum with respect to each of a, b, c (and equivalently d, e, f) by setting each partial derivative to 0:
    dSSEv/da = 2a*sum(Xi²) + 0 + 0 + 0 + 2b*sum(Xi*Yi) + 2c*sum(Xi) - 2*sum(Xi*Vi) + 0 - 0 - 0 = 0
    dSSEv/db = 0 + 2b*sum(Yi²) + 0 + 0 + 2a*sum(Xi*Yi) + 0 - 0 + 2c*sum(Yi) - 2*sum(Yi*Vi) - 0 = 0
    dSSEv/dc = 0 + 0 + 2c*sum(1) + 0 + 0 + 2a*sum(Xi) - 0 + 2b*sum(Yi) - 0 - 2*sum(Vi) = 0
Divide by two and rearrange those three, grouping a,b,c on one side and the coefficientless terms on the other:
    dSSEv/da: a*sum(Xi²)   + b*sum(Xi*Yi) + c*sum(Xi) = sum(Xi*Vi)
    dSSEv/db: a*sum(Xi*Yi) + b*sum(Yi²)   + c*sum(Yi) = sum(Yi*Vi)
    dSSEv/dc: a*sum(Xi)    + b*sum(Yi)    + c*sum(1)  = sum(Vi)
But that is a matrix multiplication:
    [ sum(Xi²)   sum(Xi*Yi) sum(Xi) ]   [ a ]   [ sum(Xi*Vi) ]
    [ sum(Xi*Yi) sum(Yi²)   sum(Yi) ] * [ b ] = [ sum(Yi*Vi) ]
    [ sum(Xi)    sum(Yi)    sum(1)  ]   [ c ]   [ sum(Vi)    ]
And similarly from the esq_Wi, you get:
    [ sum(Xi²)   sum(Xi*Yi) sum(Xi) ]   [ d ]   [ sum(Xi*Wi) ]
    [ sum(Xi*Yi) sum(Yi²)   sum(Yi) ] * [ e ] = [ sum(Yi*Wi) ]
    [ sum(Xi)    sum(Yi)    sum(1)  ]   [ f ]   [ sum(Wi)    ]
Note that the matrix on the left is the same for both, so we can merge those two into a single matrix multiplication:
    [ sum(Xi²)   sum(Xi*Yi) sum(Xi) ]   [ a d ]   [ sum(Xi*Vi) sum(Xi*Wi) ]
    [ sum(Xi*Yi) sum(Yi²)   sum(Yi) ] * [ b e ] = [ sum(Yi*Vi) sum(Yi*Wi) ]
    [ sum(Xi)    sum(Yi)    sum(1)  ]   [ c f ]   [ sum(Vi)    sum(Wi)    ]
If we call those G x P = Q, then we can solve for the parameter matrix P by
    P = inv(G) x Q
You can then get the original abcdef001 matrix by transposing P and adding a row of (0 0 1).
Then to find the blue outputs, just do BLUE_OUT = abcdef001 * BLUE_IN.
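For readers without PDL handy, here is the same normal-equations fit sketched in Python/NumPy (not part of the original post; it uses the row-per-point convention, i.e. the transpose of the column-vector layout above, and the coordinates supplied in the thread):

```python
import numpy as np

# Approximate pixel coordinates from the post, one point per row.
red_in   = np.array([[58, 48], [108, 155], [186, 80], [255, 191], [331, 48]], float)
red_out  = np.array([[471, 15], [531, 141], [603, 90], [682, 227], [747, 107]], float)
blue_in  = np.array([[125, 73], [197, 158], [282, 94]], float)
blue_out = np.array([[542, 64], [622, 175], [701, 138]], float)

# Pad each input point to [Xi, Yi, 1]; then A'A is exactly the G matrix of
# sums derived above, and A'*red_out is the Q matrix.
A = np.hstack([red_in, np.ones((len(red_in), 1))])   # N x 3
G = A.T @ A                # 3x3: sum(Xi²), sum(Xi*Yi), sum(Xi), ...
Q = A.T @ red_out          # 3x2: sum(Xi*Vi), sum(Xi*Wi), ...
P = np.linalg.solve(G, Q)  # columns are (a, b, c) and (d, e, f)

# Apply the fitted affine map to the blue points and check the residual.
B = np.hstack([blue_in, np.ones((len(blue_in), 1))])
blue_calc = B @ P
err = np.abs(blue_calc - blue_out).max()
print(round(float(err), 2))   # roughly a pixel, matching the post's observation
```

The same `P` also falls out of `np.linalg.lstsq(A, red_out, rcond=None)`, which solves the least-squares system directly without forming `G` and `Q`.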


Edit: when I said " # no more than one pixel off" , that was not meant as a guarantee; just that this example was only approximately 1 pixel from the supplied locations

Replies are listed 'Best First'.
Re^2: Abstract image registration or feature detection
by etj (Deacon) on Jul 05, 2022 at 21:22 UTC
    Ironically given your kind words, and as proved by some emails just today on the PDL mailing lists, I am not at all a linear-algebra expert. However, if you're trying to solve linear least-squares systems, then check out mlls (using QR or LQ factorisation) and mllss (using SVD) in PDL::LinearAlgebra. If you want to try Levenberg-Marquardt, check out PDL::Fit::Levmar.

    One other observation for more idiomatic PDL usage is that if you're taking several slices (e.g. your sx, sy, s1) and doing the same to each, it'll be quicker to do something like:

    my ($sx, $sy, $s1) = $red_in->slice('0:2')->mv(0,-1)->sumover->dog;
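    The same "slice once, reduce once" idea can be illustrated outside PDL; a rough NumPy analog (hypothetical, using the red input points from the post with rows x, y, 1):

```python
import numpy as np

red_in = np.array([[58, 108, 186, 255, 331],   # x row
                   [48, 155,  80, 191,  48],   # y row
                   [ 1,   1,   1,   1,   1]],  # padding row of ones
                  float)

# One reduction over the point axis replaces three separate sum() calls.
sx, sy, s1 = red_in.sum(axis=1)
print(sx, sy, s1)  # 938.0 522.0 5.0
```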
Re^2: Abstract image registration or feature detection
by kikuchiyo (Hermit) on Jul 04, 2022 at 18:38 UTC

    Thanks!

    This looks interesting, but unfortunately it dies for me with Dim mismatch in matmult of 3x3 x 3x2: 3 != 2 at the print my $P = inv($G) x $Q; line. PDL version 2.057 on Perl 5.34.1, if that matters.

      Weird, perl 5.30.0 with PDL 2.019 (which is the combo that shipped with Strawberry Perl's PDL edition) doesn't give that error.

      These are the outputs for my run (adding a print "perl version $]";print "PDL version $PDL::VERSION"; at the beginning to print the versions):

      Mathematically speaking, a [3x3] x [3x2] is the right balance of dimensions for a successful matrix multiplication (resulting in a [3x2] matrix). The use of PDL::Matrix used to be the way to make PDL match mathematical order for the dimensions, but maybe they've changed that between 2.019 and 2.057. (I don't think I can test, because the Strawberry Perl PDL setup is rather dependent on the libraries that Strawberry bundles, and I don't know if upgrading PDL will work... maybe I'll try in a sandbox to avoid ruining my "main" installation).
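      (As a sanity check on the dimensions argument, here is the same shape arithmetic sketched in NumPy rather than PDL; the matrices are stand-ins, not the real G and Q:)

```python
import numpy as np

G = 2 * np.eye(3)                  # stand-in for the 3x3 normal-equation matrix
Q = np.arange(6.0).reshape(3, 2)   # stand-in for the 3x2 right-hand side

# [3x3] times [3x2] is a well-defined product, yielding [3x2].
P = np.linalg.inv(G) @ Q
print(P.shape)  # (3, 2)
```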

      I tried getting rid of the use PDL::Matrix, but then that gets rid of the mpdl command, causing errors during perl's compile stage, and I'm not sure what hoops I would need to jump through to make my script compatible with both 2.019 and 2.057.

        Okay, 5.30.2 & PDL 2.021 and 5.32.1 & PDL 2.025 (the last pre-bundled Perl+PDL from Strawberry) both worked like my 5.30.0 & PDL 2.019.

        But I was able to install PDL 2.080 with Perl 5.32.1, to get a slightly newer PDL but slightly older Perl than you. And it showed the same dimension mismatch.

        So I did some debugging to make it work without PDL::Matrix.

        This shows the 12-ish pixel error, but will hopefully work with your 2.057 (and definitely worked with my 2.080).

        Minimal pair:

        perl -MPDL -MPDL::Matrix -e '$m = mpdl [[2,0],[0,2]]; $v = vpdl [1,2]; print $m x $v'

        [
         [2]
         [4]
        ]

        vs.

        perl -MPDL -MPDL::Matrix -e '$m = mpdl [[2,0],[0,2]]; $v = vpdl [1,2]; print inv($m) x $v'

        Dim mismatch in matmult of [2x2] x [2x1]: 2 != 1 at /usr/lib64/perl5/vendor_perl/PDL/Primitive.pm line 274.
            PDL::matmult(PDL=SCALAR(0x5600a82130e0), PDL::Matrix=SCALAR(0x5600a8202ba0), PDL=SCALAR(0x5600a8d718c8)) called at /usr/lib64/perl5/vendor_perl/PDL/Primitive.pm line 31
            PDL::__ANON__(PDL=SCALAR(0x5600a82130e0), PDL::Matrix=SCALAR(0x5600a8202ba0), "") called at -e line 1

        This looks like a bug in PDL::Matrix, specifically with inv.

        The way to avoid needing mpdl is to transpose your data, and use pdl, and I would recommend doing so. But as observed elsewhere, there are almost certainly out-of-the-box solutions to this particular problem.
