http://qs321.pair.com?node_id=1047920

ghosh123 has asked for the wisdom of the Perl Monks concerning the following question:

Hi,
I have a GUI-based tool written in Perl/Tk that can run jobs in the thousands. We are now trying to scale it up so that it can run nearly 100,000 (1 lakh) jobs without hanging.
The code base is very large, comprising roughly 60-70 module files. It uses socket connections for inter-process communication and MySQL for storing data. I need to profile this large Perl code base to find out where the bottlenecks are when running lakhs of jobs, and how I can overcome them.
Can anybody please suggest a good mechanism for finding the bottlenecks and doing the profiling? I have heard of Devel::NYTProf but am not quite sure how to use it. The GUI has a launching script which in turn calls further scripts via several modules.
I have come across the following suggestions, but I am not sure how they help or how to find these problems in my large code base:
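For NYTProf you don't need to change the code at all to get a first profile: run the launching script under the profiler and turn the output into an HTML report. For a long-running Tk GUI it is often better to profile only the busy section rather than the whole event loop, which NYTProf supports via its DB::enable_profile/DB::disable_profile calls. A minimal sketch (the script and sub names are placeholders for your own):

```perl
# First pass: profile the whole launcher (run from the shell):
#   perl -d:NYTProf launch_gui.pl     # writes ./nytprof.out
#   nytprofhtml                       # turns it into an HTML report in ./nytprof/
#
# Second pass: start with profiling disabled so the idle GUI event
# loop doesn't drown out the interesting data:
#   NYTPROF=start=no perl -d:NYTProf launch_gui.pl
# and switch the profiler on and off around the suspect section:

DB::enable_profile();     # start collecting data here
run_all_jobs();           # hypothetical: the code you suspect is slow
DB::disable_profile();    # stop collecting
DB::finish_profile();     # flush nytprof.out so the report is complete
```

The HTML report then shows per-sub and per-line inclusive/exclusive times and call counts, which answers most of the "where are the bottlenecks" questions below.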

1. Avoid->repeated->chains->of->accessors(...). Instead, use temporary variables.
Question: how does it help to avoid repeated chains of method calls and use a temporary variable instead? And how can I find all the places in my large code base where such chained calls happen?
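To see why it helps: every link in a chain like $self->config->db->handle is a full method call (symbol-table dispatch, a new stack frame), and when the chain sits inside a loop that runs a lakh of times, that cost multiplies even though the calls return the same objects every time. A sketch with made-up accessor names:

```perl
# Repeated chain: four method calls per iteration, three of which
# return the same objects every time (config/db/handle/record are
# hypothetical accessors, not from the poster's code).
for my $job (@jobs) {
    $self->config->db->handle->record($job);
}

# Hoist the loop-invariant part of the chain into a temporary once:
my $dbh = $self->config->db->handle;
for my $job (@jobs) {
    $dbh->record($job);    # one method call per iteration instead of four
}
```

To find such chains, a plain text search is usually enough, e.g. grep -rn -- '->.*->.*->' lib/ as a rough filter; NYTProf's per-line timings will then tell you which of the hits actually matter.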

2. Use faster accessors, in increasing order of speed:
Class::Accessor
-> Class::Accessor::Fast
-> Class::Accessor::Faster
-> Class::XSAccessor
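These modules all generate the same kind of read/write accessor; the later ones simply generate faster code, with Class::XSAccessor implementing the accessors in C. Switching is mostly a drop-in change. A minimal sketch of an XS-backed class (the Job class and its fields are invented for illustration):

```perl
package Job;
use strict;
use warnings;

# Generates a constructor and two accessors in XS (compiled C),
# considerably faster than equivalent pure-Perl accessors.
use Class::XSAccessor
    constructor => 'new',
    accessors   => {
        id     => 'id',      # method name => hash key
        status => 'status',
    };

package main;
my $job = Job->new(id => 42);
$job->status('running');
print $job->id, ' ', $job->status, "\n";
```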

3. Avoid calling subs that don't do anything. How can I detect this? Is there any mechanism?
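NYTProf is exactly the mechanism for this: its report shows call counts next to exclusive time, so a sub that is called millions of times yet does no real work stands out immediately. The classic offender is a debug/logging sub that is almost always a no-op; the fix is to guard it at the call site so the method call itself never happens. A sketch (the debug flag and sub are made up):

```perl
sub debug_log {
    my ($self, $msg) = @_;
    return unless $self->{debug};   # usually false -- but the cost of
    print STDERR "$msg\n";          # the method call was already paid
}

# In a hot loop, pay the call cost only when debugging is on:
$self->debug_log("job $id queued") if $self->{debug};
```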

4. Exit subs and loops early; delay initialization:

return if not ...a cheap test...;
return if not ...a more expensive test...;
my $foo = ...initialization...;
...body of subroutine...
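The idea is to order the guard clauses from cheapest to most expensive and to postpone any setup until the sub is certain to need it; on the common early-return path you then pay almost nothing. A sketch with hypothetical checks:

```perl
sub process_job {
    my ($self, $job) = @_;

    return unless defined $job;                  # cheapest test first
    return if $job->{state} eq 'done';           # still cheap: a hash lookup
    return unless $self->slot_available($job);   # expensive test (hypothetical),
                                                 # only reached when needed
    my $log = $self->open_job_log($job);         # delayed initialization --
                                                 # skipped on every early return
    # ...body of the subroutine...
}
```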
5. Fixing silly code such as:

return exists $hash{$a}{$key} ? $hash{$a}{$key} : undef;
return $hash{$a}{$key};    # instead of the above: a missing key yields undef anyway

Thanks