I am hoping that
etj will come by and make this more idiomatic PDL. But given that I'm a PDL hack, this is what I can come up with, by taking the approximate coordinates from the image supplied and doing some manual math using PDL:
#!perl -l
use 5.012; # //, strict, say
use warnings;
use PDL;
use PDL::Matrix;
print my $red_in = (mpdl [[58,48], [108,155], [186,80], [255,191], [331,48]])->transpose->append(ones(1));
print my $red_out = (mpdl [[471,15], [531,141], [603,90], [682,227], [747,107]])->transpose->append(ones(1));
print my $blue_in = (mpdl [[125,73], [197,158], [282,94]])->transpose->append(ones(1));
print my $blue_out_for_checking = (mpdl [[542,64], [622,175], [701,138]])->transpose->append(ones(1));
print for
my $sx = sum( $red_in->slice(0) ),
my $sy = sum( $red_in->slice(1) ),
my $s1 = sum( $red_in->slice(2) ),
my $sxx = sum( $red_in->slice(0)**2 ),
my $syy = sum( $red_in->slice(1)**2 ),
my $sxy = sum( $red_in->slice(0)*$red_in->slice(1) ),
my $sv = sum( $red_out->slice(0) ),
my $sw = sum( $red_out->slice(1) ),
my $sxv = sum( $red_in->slice(0)*$red_out->slice(0) ),
my $sxw = sum( $red_in->slice(0)*$red_out->slice(1) ),
my $syv = sum( $red_in->slice(1)*$red_out->slice(0) ),
my $syw = sum( $red_in->slice(1)*$red_out->slice(1) ),
;
print my $G = mpdl
[
[ $sxx, $sxy, $sx ],
[ $sxy, $syy, $sy ],
[ $sx, $sy, $s1 ],
];
print my $Q = mpdl
[
[ $sxv, $sxw ],
[ $syv, $syw ],
[ $sv, $sw ],
];
print my $P = inv($G) x $Q;
print my $abcdef001 = ($P->transpose)->append(zeros(1))->set(2,2,1);
print my $blue_out_calc = $abcdef001 x $blue_in;
print $blue_out_calc - $blue_out_for_checking; # no more than one pixel off
__END__
I am not a PDL expert, so I am sure there's a way with its linear algebra to skip doing all the math.
But I enjoy the math derivation.
You have a transformation from IN to OUT, where each point #i of IN and OUT is a column vector of the form
[ x_i ]
[ y_i ]
So for each point, you can think of it as OUT = M * IN + T
    OUT = [ a b ] . IN + [ c ]
          [ d e ]        [ f ]
But if you want to work with all of IN and OUT at once, rather than individually (embrace the power of the matrix), you can pad it out into a single multiplication as
    OUT = [ a b c ]   [ x1 x2 x3 ... xN ]   [ v1 v2 v3 ... vN ]
          [ d e f ] * [ y1 y2 y3 ... yN ] = [ w1 w2 w3 ... wN ]
          [ 0 0 1 ]   [ 1  1  1  ...  1 ]   [ 1  1  1  ...  1 ]
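As a quick sanity check of that padded form, here's a plain-Perl sketch (no PDL, and the a..f values are made up purely for illustration) that applies the 3x3 matrix to a padded point:

```perl
use strict;
use warnings;

# A toy affine map: scale x by 2 and translate by (10, 5).
# The numbers are made up, just to show the padded 3x3 form in action.
my @M = (
    [2, 0, 10],   # [ a b c ]
    [0, 1,  5],   # [ d e f ]
    [0, 0,  1],   # [ 0 0 1 ]
);

# Multiply the padded matrix by a padded point [x, y, 1].
sub apply {
    my ($M, $x, $y) = @_;
    my @p = ($x, $y, 1);
    my @q = map {
        my $row = $_;
        my $s = 0;
        $s += $row->[$_] * $p[$_] for 0 .. 2;
        $s;
    } @$M;
    return @q[0, 1];   # drop the trailing 1
}

my ($v, $w) = apply(\@M, 3, 4);
print "($v, $w)\n";   # (16, 9): 2*3 + 10 = 16, 1*4 + 5 = 9
```

The bottom row of the matrix just reproduces the padded 1, which is what lets the translation (c, f) ride along inside a single multiplication.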
So if you want to figure out the expanded M, you want to best fit on OUT = M * IN (with the extra rows).
To figure out the best fit, you basically have the equations
    Vi = a*Xi + b*Yi + c*1
    Wi = d*Xi + e*Yi + f*1
Best fit is least-sum-squared-error, so
    err_Vi = a*Xi + b*Yi + c*1 - Vi
    err_Wi = d*Xi + e*Yi + f*1 - Wi
And the squared error:
    esq_Vi = a²*Xi² + b²*Yi² + c²*1 + Vi² + 2ab*Xi*Yi + 2ac*Xi - 2a*Xi*Vi + 2bc*Yi - 2b*Yi*Vi - 2c*Vi
    esq_Wi = exact same form (I will just show the derivation on the Vi terms for now)
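If you want to convince yourself that expansion is right without redoing the algebra, a throwaway numeric check works; the values below are arbitrary, nothing special about them:

```perl
use strict;
use warnings;

# Compare (a*Xi + b*Yi + c - Vi)**2 computed directly against the
# term-by-term expansion, using arbitrary made-up values.
my ($a, $b, $c) = (1.5, -2, 3);
my ($X, $Y, $V) = (4, 7, 11);

my $direct   = ($a*$X + $b*$Y + $c - $V)**2;
my $expanded = $a**2*$X**2 + $b**2*$Y**2 + $c**2 + $V**2
             + 2*$a*$b*$X*$Y + 2*$a*$c*$X - 2*$a*$X*$V
             + 2*$b*$c*$Y    - 2*$b*$Y*$V - 2*$c*$V;

print "$direct $expanded\n";   # both 256
```

Here the direct form is (6 - 14 + 3 - 11)² = (-16)² = 256, and the expansion lands on the same number.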
Sum up the total error:
    SSEv = sum(a²*Xi² + b²*Yi² + c²*1 + Vi² + 2ab*Xi*Yi + 2ac*Xi - 2a*Xi*Vi + 2bc*Yi - 2b*Yi*Vi - 2c*Vi)
         = sum(a²*Xi²) + sum(b²*Yi²) + sum(c²*1) + sum(Vi²) + sum(2ab*Xi*Yi) + sum(2ac*Xi) - sum(2a*Xi*Vi) + sum(2bc*Yi) - sum(2b*Yi*Vi) - sum(2c*Vi)
         = a²*sum(Xi²) + b²*sum(Yi²) + c²*sum(1) + sum(Vi²) + 2ab*sum(Xi*Yi) + 2ac*sum(Xi) - 2a*sum(Xi*Vi) + 2bc*sum(Yi) - 2b*sum(Yi*Vi) - 2c*sum(Vi)
Need to find the minimum with respect to each of a, b, c (and equivalently d, e, f) by setting each partial derivative to 0:
    dSSEv/da = 2a*sum(Xi²) + 0 + 0 + 0 + 2b*sum(Xi*Yi) + 2c*sum(Xi) - 2*sum(Xi*Vi) + 0 - 0 - 0 = 0
    dSSEv/db = 0 + 2b*sum(Yi²) + 0 + 0 + 2a*sum(Xi*Yi) + 0 - 0 + 2c*sum(Yi) - 2*sum(Yi*Vi) - 0 = 0
    dSSEv/dc = 0 + 0 + 2c*sum(1) + 0 + 0 + 2a*sum(Xi) - 0 + 2b*sum(Yi) - 0 - 2*sum(Vi) = 0
Divide by two and rearrange those three, grouping the a, b, c terms on one side and the coefficientless terms on the other:
    dSSEv/da: a*sum(Xi²)   + b*sum(Xi*Yi) + c*sum(Xi) = sum(Xi*Vi)
    dSSEv/db: a*sum(Xi*Yi) + b*sum(Yi²)   + c*sum(Yi) = sum(Yi*Vi)
    dSSEv/dc: a*sum(Xi)    + b*sum(Yi)    + c*sum(1)  = sum(Vi)
But that is a matrix multiplication:
    [ sum(Xi²)   sum(Xi*Yi) sum(Xi) ]   [ a ]   [ sum(Xi*Vi) ]
    [ sum(Xi*Yi) sum(Yi²)   sum(Yi) ] * [ b ] = [ sum(Yi*Vi) ]
    [ sum(Xi)    sum(Yi)    sum(1)  ]   [ c ]   [ sum(Vi)    ]
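The stationarity condition can also be spot-checked numerically. Here's a plain-Perl sketch on toy data I made up: since SSEv is quadratic in a, a central finite difference should reproduce the closed-form dSSEv/da almost exactly.

```perl
use strict;
use warnings;

# Toy [Xi, Yi, Vi] triples (made up) and arbitrary a, b, c.
my @pts = ([1,2,5], [3,1,4], [2,4,9]);
my ($a, $b, $c) = (0.5, 1.25, -0.75);

# SSEv = sum over points of (a*Xi + b*Yi + c - Vi)**2.
sub ssev {
    my ($aa, $bb, $cc) = @_;
    my $s = 0;
    $s += ($aa*$_->[0] + $bb*$_->[1] + $cc - $_->[2])**2 for @pts;
    return $s;
}

# Closed form: dSSEv/da = 2a*sum(Xi²) + 2b*sum(Xi*Yi) + 2c*sum(Xi) - 2*sum(Xi*Vi)
my ($sxx, $sxy, $sx, $sxv) = (0) x 4;
for (@pts) {
    $sxx += $_->[0]**2;
    $sxy += $_->[0]*$_->[1];
    $sx  += $_->[0];
    $sxv += $_->[0]*$_->[2];
}
my $analytic = 2*$a*$sxx + 2*$b*$sxy + 2*$c*$sx - 2*$sxv;

# Central difference in a.
my $h = 1e-6;
my $numeric = (ssev($a+$h, $b, $c) - ssev($a-$h, $b, $c)) / (2*$h);

printf "%.4f %.4f\n", $analytic, $numeric;
```

Both come out at -32.5 for this data, which is the kind of agreement you'd expect when the derivative was taken correctly.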
And similarly from the esq_Wi, you get:
    [ sum(Xi²)   sum(Xi*Yi) sum(Xi) ]   [ d ]   [ sum(Xi*Wi) ]
    [ sum(Xi*Yi) sum(Yi²)   sum(Yi) ] * [ e ] = [ sum(Yi*Wi) ]
    [ sum(Xi)    sum(Yi)    sum(1)  ]   [ f ]   [ sum(Wi)    ]
Note that the matrix on the left is the same for both, so we can merge those two into a single matrix multiplication:
    [ sum(Xi²)   sum(Xi*Yi) sum(Xi) ]   [ a d ]   [ sum(Xi*Vi) sum(Xi*Wi) ]
    [ sum(Xi*Yi) sum(Yi²)   sum(Yi) ] * [ b e ] = [ sum(Yi*Vi) sum(Yi*Wi) ]
    [ sum(Xi)    sum(Yi)    sum(1)  ]   [ c f ]   [ sum(Vi)    sum(Wi)    ]
If we call those G x P = Q, then we can solve for the parameter matrix P by P = inv(G) x Q.
Then you can get the original abcdef001 matrix back by transposing P and adding a row of (0 0 1).
Then to find the blue outputs, just do
BLUE_OUT = abcdef001 * BLUE_IN
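To tie it together end to end, here's a plain-Perl version (no PDL; the Gauss-Jordan routine is hand-rolled just for this sketch) that builds G and Q from the red points above, solves for a..f, and predicts the blue points:

```perl
use strict;
use warnings;

# Red calibration points from above: inputs (x,y) and outputs (v,w).
my @red_in  = ([58,48], [108,155], [186,80], [255,191], [331,48]);
my @red_out = ([471,15], [531,141], [603,90], [682,227], [747,107]);

# Accumulate the sums that fill G and Q.
my ($sx,$sy,$s1,$sxx,$syy,$sxy,$sv,$sw,$sxv,$sxw,$syv,$syw) = (0) x 12;
for my $i (0 .. $#red_in) {
    my ($x, $y) = @{$red_in[$i]};
    my ($v, $w) = @{$red_out[$i]};
    $sx  += $x;    $sy  += $y;    $s1  += 1;
    $sxx += $x*$x; $syy += $y*$y; $sxy += $x*$y;
    $sv  += $v;    $sw  += $w;
    $sxv += $x*$v; $sxw += $x*$w;
    $syv += $y*$v; $syw += $y*$w;
}

# G x P = Q: augment G with Q's two columns and reduce (Gauss-Jordan
# with partial pivoting), so columns 3 and 4 end up holding P.
my @m = (
    [$sxx, $sxy, $sx, $sxv, $sxw],
    [$sxy, $syy, $sy, $syv, $syw],
    [$sx,  $sy,  $s1, $sv,  $sw ],
);
for my $col (0 .. 2) {
    my ($piv) = sort { abs($m[$b][$col]) <=> abs($m[$a][$col]) } $col .. 2;
    @m[$col, $piv] = @m[$piv, $col];        # swap in the largest pivot
    my $d = $m[$col][$col];
    $_ /= $d for @{$m[$col]};               # normalize the pivot row
    for my $row (0 .. 2) {
        next if $row == $col;
        my $f = $m[$row][$col];
        $m[$row][$_] -= $f * $m[$col][$_] for 0 .. 4;
    }
}
my ($a, $b, $c) = map { $m[$_][3] } 0 .. 2;  # first column of P
my ($d, $e, $f) = map { $m[$_][4] } 0 .. 2;  # second column of P

# Predict the blue outputs and compare against the supplied ones.
my @blue_in  = ([125,73], [197,158], [282,94]);
my @blue_out = ([542,64], [622,175], [701,138]);
my @pred;
for my $i (0 .. $#blue_in) {
    my ($x, $y) = @{$blue_in[$i]};
    push @pred, [$a*$x + $b*$y + $c, $d*$x + $e*$y + $f];
    printf "(%.1f, %.1f) vs (%d, %d)\n", @{$pred[-1]}, @{$blue_out[$i]};
}
```

This is the same math as P = inv(G) x Q, just solved by elimination instead of an explicit inverse; a real linear-algebra library's solver would of course be preferable to a hand-rolled one.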
Edit: when I said "# no more than one pixel off", that was not meant as a guarantee; just that this example came out only approximately 1 pixel off from the supplied locations.