printf "%2.15f", 44123.200/959.2;
The output is "45.999999999999993". This is due to the rounding error inherent in any system that converts base-10 floating point numbers to base 2 for arithmetic, then back to base 10 for display.
Unless you tell it to do otherwise (as I have done with printf), Perl, like most languages, rounds that to 46 for display purposes (Perl's default stringification shows roughly 15 significant digits before rounding). But int does something entirely different from rounding; it simply drops everything after the decimal point. So 45.999999999999993 gets truncated by int to 45, while printing it at the default precision rounds it up to 46.
I first encountered this fact of life back in my first high school Computer Science course in 1983 or so, taught on Apple II+ computers with some sort of Apple floating point BASIC. But it existed long before 1983, and will continue to exist as long as we use a finite number of base-2 digits to internally represent (and perform operations on) floating point base-10 numbers.
