in reply to Re: What is YOUR Development Process?
in thread What is YOUR Development Process?

Put them in a version control system.

The question was how to handle the inter-dependence of these items, not how to version them.

What are non-application modules?

CGI::Application uses a model where you replace the usual if/else ladder analyzing the input with a module whose different methods get called for different inputs. You then create an "instance" script that sets any parameters and calls that module. Non-application modules are modules used by the application module that don't assume I/O via the web (and thus would include My::Module as well as DBI). There's nothing magical about it: it's just a standardization of what people did without a framework.
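The dispatch idea can be sketched outside Perl. Here's a tiny shell analogy (not CGI::Application code; the mode names and functions are invented) showing a run-mode table replacing an if/else ladder:

```shell
#!/bin/sh
# Sketch only: each "run mode" maps to a function, instead of one
# long if/else ladder examining the input.
mode_list() { echo "showing the list"; }
mode_edit() { echo "editing a record"; }

dispatch() {                     # dispatch <run-mode>
  case "$1" in
    list) mode_list ;;
    edit) mode_edit ;;
    *)    echo "unknown run mode: $1" >&2; return 1 ;;
  esac
}

# CGI::Application would read the mode from a query parameter;
# here we fake it with an environment variable.
dispatch "${RM:-list}"
```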

Describe a non-multi-user-environment

My personal machine, where a checkout can be owned by me with no problems, as opposed to the shared machines.

you typically tag all of the files you want for a release and then checkout everything with that tag.

If application Foo depends on module Baz, and Application Bar depends on a newer version, how do you mark those dependencies? What if the required module is not of your authorship, such as a DBD, and thus not in the version control? Do you tag every file in the dependency chain? That could get very long and puts extra effort on the tagging process.

No offense, but your answers are exactly the kind of material I've been seeing: "use version control," without telling me how to synchronize files of different types; "describe a non-multi-user environment," when the apparent majority of Perl developers are in shops of only one or two people, which is effectively not multi-user development; "rsync the files to your production machines," when what worries me is making sure all the proper files are copied and the tests pass. Your answers may be correct, but they don't actually tell me what I need to be able to implement a real system.

Re^3: What is YOUR Development Process?
by tilly (Archbishop) on Nov 08, 2005 at 13:29 UTC
    If I understand you correctly, you're trying to solve a very difficult problem that you don't need to solve.

    The problem you're trying to solve is how to manage a situation where you have many different components with cross dependencies, released on independent schedules. That's a very hard problem, not least because each component needs to know what is happening with every other component it might care about.

    But you don't need to solve that. Put everything in version control and have one release cycle where you release everything. Every time. Now all of your version dependency problems go away. Rather than needing to know all of the combinations that might work together, you need only know that this combination works together. If you set things up carefully, the entire application can live in one source tree, allowing you to have multiple copies on one machine that do not interact with each other. (Configuration modules that set the right path are a good thing.)
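    The "configuration modules that set the right path" idea can be sketched in shell (directory names invented for the demo): each checkout derives its own root from its script's location, so two copies on one machine don't collide.

```shell
#!/bin/sh
# Sketch: build a throwaway checkout whose script finds its own tree
# relative to itself, rather than via a hardcoded path.
tmp=$(mktemp -d)
mkdir -p "$tmp/checkout1/bin" "$tmp/checkout1/templates"

cat > "$tmp/checkout1/bin/app.sh" <<'EOF'
#!/bin/sh
# Root is derived from this script's location, not hardcoded, so a
# second checkout elsewhere on the machine gets its own templates.
APP_ROOT=$(cd "$(dirname "$0")/.." && pwd)
echo "templates: $APP_ROOT/templates"
EOF

out=$(sh "$tmp/checkout1/bin/app.sh")
echo "$out"
rm -rf "$tmp"
```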

    Most people don't do this with modules that aren't under their control, such as a DBD. The solution there is to rely on modularity. Make sure that every production machine has the same versions of everything. If you want to upgrade a key module, make sure that the old and the new behave the same as far as your application is concerned (regression tests are a good thing here), then switch only that module, everywhere. If the part of the API that you rely on hasn't changed (generally true, though you need to test that assumption), then it doesn't matter at the point of rollout whether you're using the old version or the new.
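    One way to sketch that test-then-swap step (the module name is a stand-in, and run_regression_tests is a stub for whatever your real suite is):

```shell
#!/bin/sh
# Sketch: only roll a new version of an external module to production
# after the application's regression suite passes against it.
run_regression_tests() {                 # stub: the real suite goes here
  echo "running suite against $1" >&2
  true
}

candidate="DBD-mysql-3.0002"             # invented version for illustration
if run_regression_tests "$candidate"; then
  verdict="roll $candidate to every machine"
else
  verdict="stay on the current version"
fi
echo "$verdict"
```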

    The motto here is "work in reversible and independent changes". That way each change can be tested and rolled out. If anything breaks then you know what it was and can easily roll it back.

    But if you want to really be paranoid, what you can do is have a special subdirectory for external code. And then everything can be in there. Personally I don't like doing that though, since I've had worse experiences with binary incompatibility between machines than with moderately careful system administration as described above. (For instance I've been left with no incremental upgrade path between using different versions of Linux - which is something that I can't put into source control.)

      you're trying to solve a very difficult problem

      Possibly. Right now my process is bad, so I'm looking at what others are doing to see what is better. I'd LIKE to resolve my dependency issue between components, but that's not my overall goal: I just want a better, more reliable process.

      Put everything in version control and have one release cycle where you release everything.

      Help me out with some details here.

      Let's say I write a CRUD application. I can put the app modules, the backend CDBI modules, the templates, and the instance script into version control, and tag them all with "app Foo". I can then check them out on the production server. (In truth, I'll be looking up how to tag in subversion first :) )
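      For the subversion tagging part, a minimal command sketch, assuming the conventional trunk/tags repository layout (the repository URL and paths are invented):

```shell
# Tag everything that makes up "app Foo" as one cheap server-side copy:
svn copy -m "tag app Foo 1.0" \
    http://svn.example.com/repo/trunk \
    http://svn.example.com/repo/tags/app-Foo-1.0

# Later, on the production box, check out exactly that snapshot:
svn checkout http://svn.example.com/repo/tags/app-Foo-1.0 /var/www/app-foo
```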

      Later, someone (possibly me, possibly not) handles a request for another CRUD app with a couple of new features. That means tweaking the original app module, writing a new instance script, and creating one or two new templates, while reusing a bunch of the original templates. (Actually, we use an inheriting tree hierarchy, so any template that isn't replaced is inherited rather than copied: if /path/to/this/app/templates/Foo.tmpl doesn't exist, it will look in /path/to/this/app, then /path/to/this, and so forth.)
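      That lookup rule could be sketched roughly like this (paths are invented for the demo, and the real code presumably also checks a templates/ subdirectory at each level):

```shell
#!/bin/sh
# Sketch of the inheriting template lookup: search the app's directory,
# then each parent directory, and use the first match found.
find_template() {                # find_template <start-dir> <name>
  dir=$1
  while [ "$dir" != "/" ]; do
    [ -f "$dir/$2" ] && { echo "$dir/$2"; return 0; }
    dir=$(dirname "$dir")        # fall back to the parent directory
  done
  return 1
}

tmp=$(mktemp -d)
mkdir -p "$tmp/path/to/this/app"
touch "$tmp/path/to/Foo.tmpl"    # only a grandparent provides Foo.tmpl

found=$(find_template "$tmp/path/to/this/app" Foo.tmpl)
echo "$found"
rm -rf "$tmp"
```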

      How do we mark that? The original app should now use the newer version of the app module. On the production machine, only one copy of the app module will exist. All the templates will exist, but the new app should use any templates from the other app that aren't overridden, so I can't just snapshot at this moment.

      Also, how do we ensure that the next guy to come along can check out HIS upgrade on the production server without running into problems that the files _I_ checked out are owned by me?

      Make sure that every production machine has the same versions of everything.

      How do you do that? Previously we tried running a local mini-CPAN and loading all servers from it; currently my sysadmin is packing all Perl modules into RPMs (ick) in our local RPM repository, from which all our servers upgrade. I've seen someone else here recommend mounting /usr/local/ so that everything IS the same on all the servers. This is definitely a solvable problem; I'm just curious how YOU do it.
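      Whatever the install mechanism, one low-tech sketch for catching drift is diffing each host's installed-module list against a checked-in manifest (the file contents here are invented; installed.txt would really be generated on each host from rpm -qa, a mini-CPAN install log, or similar):

```shell
#!/bin/sh
# Sketch: detect version drift by comparing a host's module list
# against the manifest the release was tested with.
tmp=$(mktemp -d)
printf 'DBI 1.48\nDBD::mysql 3.0002\n' > "$tmp/manifest.txt"
printf 'DBI 1.48\nDBD::mysql 2.9008\n' > "$tmp/installed.txt"

if diff -u "$tmp/manifest.txt" "$tmp/installed.txt" > "$tmp/drift.txt"; then
  status="in sync"
else
  status="out of sync"               # drift.txt now shows exactly what differs
fi
echo "host is $status"
rm -rf "$tmp"
```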

      The motto here is "work in reversible and independent changes". That way each change can be tested and rolled out. If anything breaks then you know what it was and can easily roll it back.

      I like the motto; I'm still trying to figure out HOW to do it. When one module is used in multiple projects, I haven't figured out how to keep it properly tested for all of them and in sync. If I tag a particular version, then installing or upgrading apps in the wrong order will break previously installed material.

      If I bundle all of my modules in CPAN-like bundles, it can check version requirements for me and fix that part of the issue, but that only covers modules, not templates.

        I'm puzzled by something in your description.

        You seem to be thinking in terms of many independent applications that exist separately and are started by huge copy-and-pastes. If that's accurate, then that's a terrible way to operate. I think in terms of one application that has many parts, many of which share components. So the second developer just checks out the application and starts adding on where needed.

        If their development will take a while, then they should branch and then merge that branch back into HEAD when the project is finished.

        Another red flag is that you're talking about having each developer check their stuff out on production, and then wondering who owns the files.

        Instead have a regular release process to production (we aim for weekly) and have the actual installation be done using specific production user accounts. Developers don't even have personal logins on production machines. That problem is gone, and several with it.

        Incidentally, the release process should be scripted and automated. Both pushing to QA and pushing from QA to production should be a matter of pressing a button and watching it work. That way you guarantee that important steps (e.g. tagging the release and running your test suite) happen. As for the production release, a good strategy is to take half your machines out of the load balancer, install there, tell the load balancer to switch which machines are online, install on the rest, then bring the rest back online. That way, at every point, all live webservers are consistent.
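        The rollout sequence above, as a rough sketch (host names are invented, and "deploy" is a stub for the real ssh-plus-install step):

```shell
#!/bin/sh
# Sketch of the half-and-half rollout: install on the offline half,
# flip the load balancer, then install on the other half.
deploy() { echo "install on $1"; }   # stub for the scripted install

log=""
step() { log="$log[$1] "; echo "$1"; }

step "drain web1 web2 from the load balancer"
for h in web1 web2; do deploy "$h"; done
step "switch live traffic to web1 web2"
for h in web3 web4; do deploy "$h"; done
step "restore web3 web4 to the load balancer"
```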

        About production machines. Let me just say that scripting is a good thing. Script how to install version X of Foo on machine Y. Then do that on every production machine. And make that part of your install process.
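        A minimal shape for such an install script, where every name and path is an assumption and the echoes stand in for the real rsync/ssh work:

```shell
#!/bin/sh
# Sketch: one parameterized installer, run identically on every machine,
# so "install version X of Foo on machine Y" is a single command.
install_app() {                  # install_app <app> <version> <host>
  echo "[$3] fetch release $1-$2"
  echo "[$3] run smoke tests"
  echo "[$3] point 'current' symlink at $1-$2"
}

install_app foo 1.2 web1
```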

        Personally I don't like having /usr/local/ on a shared mount, because that mount can become a single point of failure. Plus see the binary incompatibility issue that I had before - you're now forced to do "big bang" OS upgrades, all machines at once. However to whatever extent possible, you want to make your machines cookie cutter copies. The details are a matter of system administration.