Re^10: let Makefile.PL to do the Readme file for me -- new target?
by afoken (Chancellor) on Jan 20, 2021 at 23:01 UTC ( [id://11127171] )
This must be error prone.

Understanding that is a little hard at first if you have a DOS / Windows background, or at least it was for me. DOS was a terrible environment.

Shelling out from any program larger than hello world ate lots of memory, because DOS could not swap out the now-waiting program. It stayed in memory, and memory was a scarce resource (no, you could not run programs from EMS or XMS). Yes, there were tricks to have more memory available in subshells, but in general, you avoided shelling out. You learned quite fast that almost all programs you tried to invoke from your program failed due to low memory. Unix shells out all the time, sometimes without you even knowing. Unix also grew up on memory-limited machines, but swapping out unused programs became standard very early in its development.

The command interpreter (command.com) sucked big time. Trying to do more than running a few programs in sequence required a lot of patience, deep knowledge of the bugs in the command interpreter, and of how to use them. Having a few spare chickens to sacrifice also helped.

Passing arguments to programs was severely limited, too: no more than 126 bytes for program and arguments, and all arguments were passed as a single string. So if you needed to pass more than a few options, you would write them to a file and pass the name of that file to the program. Again, Unix had, and still has, the better design: pass an array of strings, and have the invoking shell prepare that array in the same way for all invoked programs, so that no program (except for the shell) has to worry about variable expansion, resolving * and ? in path names, and so on.

Oh, and environment variables: 256 bytes by default, about half of which was needed for PATH. The size was fixed, set during boot by a parameter passed to command.com. You would not use environment variables if you did not have to. Unix uses environment variables all the time, and many shells allow setting them temporarily (e.g. PERLIO_DEBUG=/tmp/log perl test.pl).

And while DOS pipes look like Unix pipes, they are nothing like them. DOS pipes are a feature of the command interpreter, not of the operating system, and are implemented as simple redirections to a temporary file. The real DOS also could not redirect STDERR, another limitation of the command interpreter. The short Perl sketch below shows what the Unix side of these three differences looks like in practice.
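To make that concrete, here is a minimal Perl sketch of the Unix behaviours just described: an argv array passed without a shell, a temporarily set environment variable, and a real OS-level pipe. It assumes a Unix-ish system with /bin/echo and ls available; none of this works the same way under command.com.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # List form of system(): Perl passes an argv array straight to
    # execvp(), no shell involved. Spaces and wildcards in the
    # arguments arrive at the program untouched.
    system('/bin/echo', 'one argument with spaces', '*.txt');

    # String form: because the string contains a shell metacharacter,
    # Perl hands it to /bin/sh -c, which does word splitting and glob
    # expansion first -- the closest Unix gets to the single command
    # line string of DOS.
    system('/bin/echo one argument with spaces *.txt');

    # Temporarily set environment variable, the in-program equivalent
    # of "PERLIO_DEBUG=/tmp/log perl test.pl". local() restores the
    # old value when the block is left.
    {
        local $ENV{PERLIO_DEBUG} = '/tmp/log';
        system('perl', '-e', 'print "$ENV{PERLIO_DEBUG}\n"');
    }

    # A real pipe: the kernel connects the child's STDOUT directly to
    # our filehandle. No temporary file is written, unlike a DOS pipe.
    open my $pipe, '-|', 'ls', '-l' or die "cannot fork: $!";
    print "got: $_" while <$pipe>;
    close $pipe or warn "ls exited with status $?";

The list form of system() is also why Perl scripts can sidestep shell quoting problems entirely when invoking other programs.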
So, on DOS you were effectively forced to do everything in one big program. At best, you could reuse some existing source code. You were effectively trained not to call existing programs, due to the memory limits.

Now imagine coming from that environment to Unix: a multi-user, multi-tasking system with real memory management and a powerful shell that can do tons of things you could not even dream of when using command.com. It is a culture shock.

Things don't seem that bad when you first look at Windows; at least you get working memory management. And while no supported version of Windows is based on DOS any more, the NT-based Windows versions, like OS/2, still inherited a lot of concepts from DOS. Windows still has no fork() and exec() (instead, there are about 10 API functions with up to about 30 parameters), batch files still emulate bugs from ancient DOS versions and add a lot of new quirks, program parameters are still passed as a single string, and so on. Windows does not impose as many limits on programs as DOS did, but the DOS limits have survived as part of the DOS/Windows culture.

You still rarely use the environment, you don't use pipes between programs to process a few megabytes of data (text or binary), and you rarely invoke other programs from your program.

Alexander
-- Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)