Rob,
>> Mine used their dlls, though I (perhaps unnecessarily) rebuilt the import libs from those dlls using gendef and dlltool.
I finally got everything to compile and link, but I could not make it work with static linking; I had to use --enable-shared. Why would anybody prefer shared over static for, say, ImageMagick?
When you are dashing off to work in the morning, who would want to scour the neighborhood for 4 matching tires, pressurize them uniformly, jack up the car, torque them to spec, and put the tools away so that, after half an hour of needless rigmarole, you're finally off? Why?
In the horrible, old DOS days when you had to operate in a megabyte or less, this made sense. Today, breaking find.exe into a 64 KB .exe and 3 DLLs totaling 4.26 MB seems anachronistic.
With disk space going for $150 per 4 TERABYTES, a 100% disk saving on 4 MB is worth:
4E6 B * $150 / 4E12 B = $0.00015.
If you have 7000 DLLs, your savings could approach a whole DOLLAR! All of the Cygwin I used had 331 DLLs totaling 75 MB, so they saved less than a third of a cent. The space-saving angle is PREPOSTEROUS on the face of it!
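For what it's worth, the back-of-the-envelope arithmetic above checks out. A quick sketch using only the figures already quoted ($150 per 4 TB, the 4 MB find.exe DLLs, my 331-DLL/75 MB install, and 7000 DLLs at a similar ~4 MB apiece):

```python
# Dollar value of disk space "saved" by sharing DLLs, at $150 per 4 TB.
PRICE_PER_BYTE = 150 / 4e12  # dollars per byte

def savings(bytes_saved: float) -> float:
    """Dollar value of not storing bytes_saved bytes."""
    return bytes_saved * PRICE_PER_BYTE

print(f"4 MB of DLLs (find.exe case): ${savings(4e6):.5f}")        # $0.00015
print(f"331 DLLs totaling 75 MB:      ${savings(75e6):.4f}")       # $0.0028
print(f"7000 DLLs at ~4 MB apiece:    ${savings(7000 * 4e6):.2f}") # $1.05
```

Even the absurdly generous 7000-DLL case barely clears a dollar of disk.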
How much memory are you saving when you load a DLL containing 100 functions but will only ever use 3 of them, each costing a page fault? Caching code you will never use is a 100% waste. Seeking your heads all over town scavenging scattered DLLs while reading ahead megabytes of stuff you will never use is another titanic cache miss! How can you beat 1 seek, 1 read, a tiny amount of excess read-ahead, and being immediately ready for action?
But if you already have a module loaded into memory, you can share the same image? It's like having 5 kids and one set of toys and arbitrating who gets which toy and when. Buy each kid their own toy for $5 and forget about the critical regions, setting and checking semaphores, locking and unlocking, rounding of robins, mutually assured exclusions, mutating mutexes, etc.
There is a case for a DLL when you are using a library provided by a third party, but when you have all of the code right there, chopping it into tiny chunks and reassembling it just before you need it seems like the pinnacle of inefficiency.
Time is $$. My main use case is crunching 219 MB of UINT48, raw, RGB data AFAP. Paying a 5% - 20% performance penalty to save a megabyte of memory seems absurd unless you are on a tiny device with extremely limited resources. I have 32 GB already paid for, so saving a meg or 2 saves me absolutely nothing.
I must be missing something here.
B