Re: Directory Recursion Disorder
by pzbagel (Chaplain) on Jun 23, 2003 at 07:30 UTC
if (-d "$directory/$dir")
HTH
Re: Directory Recursion Disorder
by barbie (Deacon) on Jun 23, 2003 at 08:46 UTC
|
You want to avoid overkill, but without File::Find. Does that include File::Find::Rule too? If not, take a look at it, as it does what you want ... all in one line (okay, 2 if you include the use ;)). From the synopsis:
use File::Find::Rule;
# find all the subdirectories of a given directory
my @subdirs = File::Find::Rule->directory->in( $directory );
Then to print use something like:
print join("\n",@subdirs);
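Since the OP's tree is only two levels deep, File::Find::Rule's mindepth/maxdepth methods can bound the search to exactly those levels. A minimal sketch (File::Find::Rule is a CPAN module, not core; the temp-tree setup and directory names are made up here just so the example runs on its own):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find::Rule;            # CPAN module, not in the core distribution
use File::Temp qw(tempdir);
use File::Path qw(make_path);

# Throwaway two-level tree so the sketch is self-contained.
my $root = tempdir( CLEANUP => 1 );
make_path("$root/$_") for qw(alpha/one beta/two);

# mindepth(1) skips the root itself; maxdepth(2) stops at the second level.
my @subdirs = File::Find::Rule->directory
                              ->mindepth(1)
                              ->maxdepth(2)
                              ->in( $root );

print join("\n", @subdirs), "\n";
```

The same rule object can be reused against different roots, which is handy if the base directory is configurable.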
--
Barbie | Birmingham Perl Mongers | http://birmingham.pm.org/
use File::Spec;
use Text::Glob 'glob_to_regex';
use Number::Compare;
use Carp qw/croak/;
use File::Find (); # we're only wrapping for now
use Cwd; # 5.00503's File::Find goes screwy with max_depth == 0
I try to avoid such overkill when I'm just recursing down 2 directories and throwing the structure in a HoA. Depending on 5 extra modules for a seemingly simple task gives me nightmares. Thanks for the suggestion though, I'll make sure to check it out in a bit more depth when I overcome my paranoia ;-).
|
I try to avoid such overkill when I'm just recursing down 2 directories and throwing the structure in a HoA.
You keep saying overkill. You said it about File::Find itself, and you say it here. I think it's a bad argument. File::Find uses Carp, Exporter and Cwd either directly or indirectly. Exporter gets included with most modules, Cwd is pretty common and even then isn't that big, and Carp is like Exporter: it's almost always being used already. (Just saying use warnings; brings it into existence.) As for File::Find itself, it isn't that large either. So I think your overkill argument is bogus. Especially when it allows you to write:
use File::Find;
my @files;
find { no_chdir => 1, wanted => sub { -d and return; push @files, $_ } }, @roots;
Putting the files into the desired data structure is left as an exercise for you. I might just say that I suspect File::Spec will come in useful.
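The exercise left above might be sketched like this, using only core modules (File::Find, File::Spec). The directory names and the throwaway temp tree are invented here so the example is self-contained; note it collects directories into the HoA, since that is what the OP's two-level structure holds:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use File::Spec;
use File::Temp qw(tempdir);
use File::Path qw(make_path);

# Throwaway two-level tree so the sketch is self-contained.
my $root = tempdir( CLEANUP => 1 );
make_path("$root/$_") for qw(alpha/one alpha/two beta/three);

# HoA: top-level directory name => [ its subdirectory names ].
my %dirs;
find( {
    no_chdir => 1,                       # $_ is the full path, not the basename
    wanted   => sub {
        return unless -d $_;
        my @parts = File::Spec->splitdir( File::Spec->abs2rel( $_, $root ) );
        return if "@parts" eq '.';       # skip the root itself
        if    ( @parts == 1 ) { $dirs{ $parts[0] } ||= [] }
        elsif ( @parts == 2 ) { push @{ $dirs{ $parts[0] } }, $parts[1] }
    },
}, $root );

printf "%s => [%s]\n", $_, join( ' ', sort @{ $dirs{$_} } ) for sort keys %dirs;
```

File::Spec->abs2rel plus splitdir does the path-splitting portably, which is the part demerphq hints File::Spec would be useful for.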
The point here is that in trying to avoid overkill that doesn't exist you have wasted a bunch of time trying to partially reinvent a wheel. You seem to be more concerned with the startup time of your script (which is mostly determined by the OS and physical resource bottlenecks) than with the time it takes you to write the script. I bet you would have to run your program literally thousands of times before you made up the lost time that you personally have suffered.
This comes down to premature optimisation. You are trying to make a process faster when you have absolutely no evidence that the process is suffering performance problems. (You can't have this evidence, as you haven't written the script yet, have you?) So you have committed two golden gaffes: the first is to optimise prematurely, and the second is to badly reinvent a wheel to do so. This is hardly an efficient use of your time.
---
demerphq
<Elian> And I do take a kind of perverse pleasure in having an OO assembly language...
Re: Directory Recursion Disorder
by PodMaster (Abbot) on Jun 23, 2003 at 07:32 UTC
Re: Directory Recursion Disorder
by Beatnik (Parson) on Jun 23, 2003 at 09:17 UTC
Re: Directory Recursion Disorder
by tachyon (Chancellor) on Jun 23, 2003 at 10:14 UTC
my $root = 'c:';
my @dirs = grep { -d } glob( "$root/*" );
print "$_\n" for @dirs;
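tachyon's snippet lists one level; for the OP's two-level case the same glob/grep idea can simply be nested, one glob per level. A sketch (the temp tree and directory names are invented here so the example runs anywhere, instead of assuming a `c:` drive):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Basename qw(basename);
use File::Temp qw(tempdir);
use File::Path qw(make_path);

# Throwaway two-level tree so the sketch is self-contained.
my $root = tempdir( CLEANUP => 1 );
make_path("$root/$_") for qw(alpha/one beta/two);

# Same glob/grep idea, taken one level deeper and stored as a HoA.
my %dirs;
for my $dir ( grep { -d } glob "$root/*" ) {
    $dirs{ basename $dir } = [ map { basename $_ } grep { -d } glob "$dir/*" ];
}

print "$_: @{ $dirs{$_} }\n" for sort keys %dirs;
```

No modules beyond the core here, which fits the OP's minimal-dependency preference.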
cheers
tachyon
s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print
Re: Directory Recursion Disorder
by crouchingpenguin (Priest) on Jun 23, 2003 at 11:30 UTC
Re: Directory Recursion Disorder
by BrowserUk (Patriarch) on Jun 23, 2003 at 16:10 UTC
Given that your directory structure is only 2 levels deep, there is no need for recursion. You can build your Hash of Arrays in one line.
my %dirs = map{
$_ => [ map{ m[.*/(.*?)$] } glob "$_/*" ]
} grep{ -d } glob "/basedir/*";
The main advantage is that you build the data structure you want directly, but I'm also not much of a fan of the File::Find* stuff.
Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller
use glob to expand filenames in /basedir
use grep to grab all directory names
for each directory name grab its subdirs
pass them on to map, where they're matched against the regex
'.*/(.*?)$'
assign the (array) result from the map to the corresponding %dir key
I think I've got the basic idea of it, could someone explain how the subdir keys are assigned in the hash in a bit more detail? Thanks :)
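To make the key assignment concrete: the outer map evaluates to a flat (key, value, key, value, ...) list, and assigning that flat list to %dirs is what pairs each directory path (the key) with its arrayref of subdir names (the value). Here is the same one-liner run against a throwaway tree (the tree and names are invented for illustration), with the pairing spelled out in comments:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempdir);
use File::Path qw(make_path);

# Throwaway two-level tree so the example is self-contained.
my $root = tempdir( CLEANUP => 1 );
make_path("$root/$_") for qw(alpha/one alpha/two beta/three);

# The outer map returns a flat list like:
#   "$root/alpha" => ['one','two'], "$root/beta" => ['three']
# and hash assignment consumes that list pairwise: odd elements become
# keys, even elements become values.
my %dirs = map {
    $_ => [ map { m[.*/(.*?)$] } glob "$_/*" ]   # regex in list context returns the capture: the last path component
} grep { -d } glob "$root/*";

print "$_ => @{ $dirs{$_} }\n" for sort keys %dirs;
```

Note that the keys are the full paths as returned by the outer glob, not bare directory names; basename them first if you want short keys.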