> I just finished upgrading our large and complex Perl-based internal production system to run under a newer version of Linux, which comes with a newer Perl and other newer [...]
Yes, we face a similar problem across many different Unix flavours.
We don't use the system Perl on any platform, though; we always build our own Perl from source.
But yes, it's a big and hairy problem, which is why we're going to do it early
in the release cycle to allow plenty of time to flush out obscure bugs.
Unfortunately, we've got pretty poor test coverage on much of our code, so
we'll need to do quite a bit of manual testing.
BTW, I was flabbergasted to hear Titus Winters, in his talk
"C++ as a 'Live at Head' Language",
claim that Google has a single C++ repository, shared across the whole company, containing many millions
of lines of code, and that they always "live at head", meaning that everyone is
always using the latest version of all code ... so they never do "upgrades"!
As you might expect, to pull this off, you need strong discipline and excellent test coverage,
combined with very sophisticated automated tools.
Update: Some points from Titus Winters' talk:
- Programming ("Hey, I got my thing to work!") vs Engineering ("What happens when my code needs to live a long time?").
- Engineering is Programming integrated over time.
- SemVer proved inadequate at Google (it over-simplifies and over-constrains). SemVer in summary: given a version number MAJOR.MINOR.PATCH, increment the MAJOR version when you make incompatible API changes, the MINOR version when you add functionality in a backwards-compatible manner, and the PATCH version when you make backwards-compatible bug fixes (additional labels for pre-release and build metadata are available as extensions).
- Mentions the dreaded diamond dependency and that dependency graphs grow quadratically.
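The SemVer bump rules summarised above can be sketched in a few lines. This is just an illustration (not from the talk, and not a substitute for a real versioning library); the function name `bump` and the change-type labels are my own invention:

```python
# Illustrative sketch of the SemVer increment rules; the change-type
# labels ("incompatible", "feature", "fix") are made up for this example.

def bump(version: str, change: str) -> str:
    """Return the next version per SemVer rules for the given change type."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "incompatible":   # breaking API change -> bump MAJOR, reset rest
        return f"{major + 1}.0.0"
    if change == "feature":        # backwards-compatible addition -> bump MINOR
        return f"{major}.{minor + 1}.0"
    if change == "fix":            # backwards-compatible bug fix -> bump PATCH
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")

print(bump("1.4.2", "incompatible"))  # -> 2.0.0
print(bump("1.4.2", "feature"))       # -> 1.5.0
print(bump("1.4.2", "fix"))           # -> 1.4.3
```

Note that MINOR and PATCH reset to zero on a higher-level bump, which is exactly the kind of detail SemVer pins down but which, per Winters, still under-specifies what "compatible" means at Google's scale.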
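The diamond dependency problem is easy to show concretely: an application depends on two libraries that each depend on a third, but at different versions. Here's a minimal sketch (the package names and graph shape are hypothetical, purely for illustration):

```python
# Hypothetical dependency graph: "app" depends on libB and libC, which
# both depend on libD but pin different versions -- the classic diamond.

deps = {
    "app":  [("libB", "1.0"), ("libC", "1.0")],
    "libB": [("libD", "1.0")],
    "libC": [("libD", "2.0")],
    "libD": [],
}

def find_conflicts(graph):
    """Collect every version each package is required at; flag disagreements."""
    required = {}
    for edges in graph.values():
        for pkg, ver in edges:
            required.setdefault(pkg, set()).add(ver)
    return {pkg: sorted(vers) for pkg, vers in required.items() if len(vers) > 1}

print(find_conflicts(deps))  # -> {'libD': ['1.0', '2.0']}
```

Living at head sidesteps this entirely: since everyone builds against the latest version of everything, no two consumers can ever pin conflicting versions of libD in the first place.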