Re^2: Perl cgi without mod_perl, your experience

by tachyon (Chancellor)
on Jun 22, 2004 at 14:18 UTC ( [id://368730] )


in reply to Re: Perl cgi without mod_perl, your experience
in thread Perl cgi without mod_perl, your experience

While your point is technically valid, it is statistically invalid. For the vast majority of interactive websites handled via CGI, mod_perl or something similar is the solution. To be technically correct, one would say that you get benefits whenever the startup time (forking an interpreter, connecting to a DB) forms a significant portion of the total runtime. There are relatively few exceptions to this; downloads and other streams, plus long-running processing, are among those exceptions. It is not a case of *some*, it is a case of *mostly*.
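
To put a rough number on that, here is a minimal sketch that times bare interpreter startup (it ignores DB connection cost); the figures in the closing comment (20 ms of real work, 30 ms of startup) are illustrative assumptions, not measurements:

    #!/usr/bin/perl
    # Rough estimate of the interpreter startup cost that plain CGI
    # pays on every request and a persistent environment pays once.
    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday tv_interval);

    my $runs = 50;
    my $t0   = [gettimeofday];
    system( $^X, '-e', '1' ) for 1 .. $runs;   # fork + compile an empty script
    my $startup_ms = tv_interval($t0) / $runs * 1000;

    printf "average interpreter startup: %.1f ms per request\n", $startup_ms;
    # If the real work takes, say, 20 ms, a 30 ms startup means the
    # server spends ~60% of each request just getting going.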

BTW, extra hardware also means more reliability.

Rubbish. Extra hardware actually increases the chances of a failure. Think about it... If the mean time to failure is 700 days and you have 700 servers, you will on average have one fall over every day. Extra hardware only provides uptime/reliability protection if you use that hardware to create redundant nodes with automatic failover, and to be frank I don't think we are talking that level. If you use efficient code (mod_perl included) you may be able to *afford* that kind of infrastructure, as boxes that would otherwise be working inefficiently can be made to do more work*, freeing resources for redundancy. But even the simplest high availability system really needs 4 nodes: a pair out front to create your redundant load balancer and a pair behind to do the work/provide failover. Of course there are lots of other ways to skin that cat, depending on how much downtime you can tolerate.

* Of course caning the hell out of your hardware does not help longevity ;-)
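
A back-of-envelope check of that failure rate (assuming independent failures):

    # 700 boxes, each with a 700-day mean time to failure:
    my $servers   = 700;
    my $mtbf_days = 700;
    printf "expected failures per day across the fleet: %.2f\n",
        $servers / $mtbf_days;    # 1.00 -- one box falls over daily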

cheers

tachyon


Replies are listed 'Best First'.
Re: Perl cgi without mod_perl, your experience
by Abigail-II (Bishop) on Jun 22, 2004 at 14:49 UTC
    Extra hardware actually increases the chances of a failure.
    Yes, but that's not in most people's interest. It's like saying "I don't do backups, because that could mean that either my hard disk or my tape contains bad spots". While it may increase the chance of a failure, it reduces the chance of a critical failure, where a critical failure means the service you are providing is no longer available (or only available at unacceptable performance).
    If the mean time to failure is 700 days and you have 700 servers, you will on average have one fall over every day.
    If the mean time between failure is 700 days, and you have one server, you will be down once every 700 days. If you have 700 servers and a one-day recovery, all 700 will be down at once only every 700^700 days, a 1992-digit number of days (roughly 3.7 × 10^1991). Or if your mean time between failure is 700 days, and it takes a day to recover from a failure, with only two servers you will be down once every 700 × 700 days, which is about 1342 years. Redundant servers work in parallel, and not in a serial configuration.
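
    A quick sketch of that arithmetic, assuming independent failures and one-day repairs (so a box is down on any given day with probability 1/700):

        use strict;
        use warnings;
        use Math::BigInt;

        # All 700 boxes down on the same day: once every 700**700 days.
        my $days = Math::BigInt->new(700)->bpow(700);
        printf "700-server total outage: once every 10^%d days or so\n",
            $days->length - 1;    # 700**700 has 1992 digits

        # Two redundant boxes: both down on the same day every 700*700 days.
        printf "2-server total outage: once every %.0f years\n",
            700 * 700 / 365;      # ~1342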

    But even the simplest high availability system really needs 4 nodes - a pair out front to create your redundant load balancer and a pair behind to do the work/provide failover.
    High availability systems don't need load balancers. It's a high availability system, not a load balancing system. All the high availability systems I've worked with (HP's ServiceGuard, Veritas Cluster, SUN Cluster) work fine with 2 nodes.
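
    A hypothetical two-node failover loop, just to illustrate the shape of it; the take_over_ip helper and the address are made up, and none of this is specific to ServiceGuard, Veritas, or Sun Cluster:

        #!/usr/bin/perl
        # Hypothetical standby node: watch the primary's web port and
        # take over when it stops answering. take_over_ip is a made-up
        # helper that would move the service IP to this box.
        use strict;
        use warnings;
        use Net::Ping;

        my $primary = '192.0.2.10';           # example (documentation) address
        my $p = Net::Ping->new( 'tcp', 2 );   # TCP probe, 2-second timeout
        $p->port_number(80);

        while (1) {
            unless ( $p->ping($primary) ) {
                warn "primary unreachable -- taking over the service IP\n";
                system('/usr/local/sbin/take_over_ip');   # hypothetical
                last;
            }
            sleep 5;
        }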
    Of course there are lots of other ways to skin that cat, depending on how much downtime you can tolerate.
    Oh, yeah, but if you can tolerate downtime, you may be able to tolerate slower service. ;-)
      This is only true if you assume that your machines magically repair themselves after they fail. In reality, it might take a day, a week, or a month to repair, depending on the type of failure. I've been in the position of having noticeable degradation of service because our CPUs were failing faster than Sun could provide replacements, so dead machines were piling up.
        If it takes a long time to repair after a failure, that's all the more reason to have more hardware. I know of few companies that can afford to have an important service unavailable for weeks or months.

        Abigail
