http://qs321.pair.com?node_id=506735


in reply to Re^2: What is YOUR Development Process?
in thread What is YOUR Development Process?

If I understand you correctly, you're trying to solve a very difficult problem that you don't need to solve.

The problem that you're trying to solve is how to manage a situation where you have many different components which have cross dependencies and are released on independent schedules. That's a very hard problem, not least because each component needs to know what is happening with every other component that it might care about.

But you don't need to solve that. Put everything in version control and have one release cycle where you release everything. Every time. Now all of your version dependency problems go away. Rather than needing to know all of the combinations that might work together, you need only know that this combination works together. If you set things up carefully, the entire application can live in one source tree, allowing you to have multiple copies on one machine that do not interact with each other. (Configuration modules that set the right path are a good thing.)
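
A minimal sketch of such a configuration module (the package name and directory layout are invented, and it assumes the file lives at lib/MyApp/Config.pm inside the checkout):

    package MyApp::Config;
    use strict;
    use warnings;
    use Cwd qw(abs_path);
    use File::Spec;
    use File::Basename qw(dirname);

    # Everything is resolved relative to the checkout this file lives in,
    # so two checkouts on one machine never step on each other.
    my $root = abs_path( File::Spec->catdir( dirname(__FILE__), '..', '..' ) );

    sub root         { $root }
    sub template_dir { File::Spec->catdir($root, 'templates') }
    sub conf_file    { File::Spec->catfile($root, 'conf', 'app.conf') }

    1;

Application code then asks MyApp::Config for paths instead of hard-coding them, which is what lets several copies of the tree coexist on one box.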

Most people don't do this with modules that aren't under their control, such as a DBD. The solution there is to rely on modularity. Make sure that every production machine has the same versions of everything. If you want to upgrade a key module, make sure that the old and the new work the same as far as your application goes (regression tests are a good thing here), then switch only that module, everywhere. If the part of the API that you're relying on hasn't changed (generally this is true, though you need to test this assumption), then it doesn't matter at the point of rollout whether you're using the old or the new.
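
One cheap complement to those regression tests is a tiny test file that asserts the versions you have qualified, so a machine that has drifted fails the suite before you even get to the behavioural tests. (A sketch - the modules and version numbers here are placeholders for whatever you actually depend on.)

    # t/00-versions.t
    use strict;
    use warnings;
    use Test::More tests => 2;

    use DBI;
    use DBD::mysql;

    # The versions we qualified; edit these when you deliberately upgrade.
    is( $DBI::VERSION,        '1.48',   'DBI is the qualified version' );
    is( $DBD::mysql::VERSION, '3.0002', 'DBD::mysql is the qualified version' );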

The motto here is "work in reversible and independent changes". That way each change can be tested and rolled out. If anything breaks then you know what it was and can easily roll it back.

But if you really want to be paranoid, you can have a special subdirectory for external code, and then everything can live in there. Personally I don't like doing that though, since I've had worse experiences with binary incompatibility between machines than with moderately careful system administration as described above. (For instance, I've been left with no incremental upgrade path between different versions of Linux - which is something that I can't put into source control.)


Re^4: What is YOUR Development Process?
by swiftone (Curate) on Nov 08, 2005 at 15:45 UTC
    you're trying to solve a very difficult problem

    Possibly. Right now my process is bad, so I'm looking at what others are doing to see what is better. I'd LIKE to resolve my dependency issue between components, but that's not my overall goal: I just want a better, more reliable process.

    Put everything in version control and have one release cycle where you release everything.

    Help me out with some details here.

    Let's say I write a CRUD application. I can put the app modules, the backend CDBI modules, the templates, and the instance script into version control, and tag them all with "app Foo". I can then check them out on the production server. (In truth, I'll be looking up how to tag in subversion first :) )

    Later, someone (possibly me, possibly not) handles a request for another CRUD app with a couple of new features. That means tweaking the original app module, writing a new instance script, and creating one or two new templates, while reusing a bunch of the original templates. (Actually, we use an inheriting tree hierarchy, so any templates not replaced are inherited, not copied: if /path/to/this/app/templates/Foo.tmpl doesn't exist, it will look in /path/to/this/app, then /path/to/this, and so forth.)
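
    (For the curious, the lookup is roughly the following - just a sketch with made-up names; the real code stops at the top of the template tree rather than walking all the way to the filesystem root.)

        use File::Basename qw(dirname);

        # Starting from the app's own template directory, walk up the
        # directory tree and return the first place the template exists.
        sub find_template {
            my ($start_dir, $name) = @_;
            my $dir = $start_dir;
            while (1) {
                my $candidate = "$dir/$name";
                return $candidate if -e $candidate;
                my $parent = dirname($dir);
                last if $parent eq $dir;    # ran out of parents
                $dir = $parent;
            }
            return;    # not found anywhere in the chain
        }

        # e.g. find_template('/path/to/this/app/templates', 'Foo.tmpl');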

    How do we mark that? The original app should now use the newer version of the app module. On the production machine, only one copy of the app module will exist. All the templates will exist, but the new app should use any templates from the other app that aren't overridden, so I can't just snapshot at this moment.

    Also, how do we ensure that the next guy to come along can check out HIS upgrade on the production server without running into problems that the files _I_ checked out are owned by me?

    Make sure that every production machine has the same versions of everything.

    How do you do that? Previously we tried running a local mini-CPAN and loading all servers off of that; currently my sysadmin is packaging all Perl modules into RPMs (ick) in our local RPM repository, which all our servers upgrade from. I've seen someone else here recommend mounting /usr/local/ so that everything IS the same on all the servers. This is definitely a solvable problem, I'm just curious how YOU do it.

    The motto here is "work in reversible and independent changes". That way each change can be tested and rolled out. If anything breaks then you know what it was and can easily roll it back.

    I like the motto; I'm still trying to figure out HOW to do it. When one module is used in multiple projects, I haven't figured out how to keep it tested and in sync for all of them. If I tag a particular version, then installing/upgrading apps in the wrong order will break previously installed material.

    If I bundle all of my modules in CPAN-like bundles, it can check version requirements for me and fix that part of the issue, but that only covers modules, not templates.
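
    (For reference, the kind of bundle I mean is the usual CPAN Bundle:: module - the example below is invented, and the version numbers are minimums that CPAN.pm would enforce:)

        package Bundle::MyApp;

        $VERSION = '0.01';

        1;

        __END__

        =head1 NAME

        Bundle::MyApp - the modules our apps need, at known-good versions

        =head1 SYNOPSIS

        perl -MCPAN -e 'install Bundle::MyApp'

        =head1 CONTENTS

        DBI 1.48

        Class::DBI 0.96

        HTML::Template 2.7

        =cut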

      I'm puzzled by something in your description.

      You seem to be thinking in terms of many independent applications that exist separately and are started by huge copy and pastes. If that's accurate, then that's a terrible way to operate. I think in terms of one application that has many parts to it, many of which share components. So the second developer just checks out the application and starts adding on where needed.

      If their development will take a while, then they should branch and then merge that branch back into HEAD when the project is finished.

      Another red flag is that you're talking about having each developer check their stuff out on production, and then wondering who owns the files.

      Instead, have a regular release process to production (we aim for weekly) and have the actual installation done by specific production user accounts. Developers don't even have personal logins on the production machines. That problem is gone, and several others with it.

      Incidentally, the release process should be scripted and automated. Both pushing to QA, and then pushing from QA to production, should be a matter of pushing a button and watching it work. That way you guarantee that important steps (eg tagging the release and running your test suite) happen.

      A note on the production release. A good strategy is to take half your machines out of the load balancer, install there, tell the load balancer to switch which machines are online, install on the rest, then bring the rest back online. That way at all points all webservers are consistent.
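
      As an illustration only (the repository URL, host names, and paths below are all invented), the "push a button" part can be a short driver script that refuses to deploy unless the tag was made and the tests passed:

          #!/usr/bin/perl
          # release.pl TAGNAME - tag, test, then push one release.  Sketch only.
          use strict;
          use warnings;

          my $tag   = shift or die "usage: release.pl TAGNAME\n";
          my $repo  = 'http://svn.example.com/repos/app';   # invented
          my @hosts = qw(web1 web2);                        # invented

          sub run {
              my @cmd = @_;
              system(@cmd) == 0 or die "FAILED: @cmd\n";
          }

          # 1. Tag exactly what is being released.
          run('svn', 'copy', "$repo/trunk", "$repo/tags/$tag", '-m', "Release $tag");

          # 2. Check the tag out somewhere clean and run the full test suite.
          run('svn', 'checkout', "$repo/tags/$tag", "/tmp/$tag");
          run('sh', '-c', "cd /tmp/$tag && prove -r t/");

          # 3. Only if that all succeeded, sync to the servers as the
          #    dedicated production user, never as an individual developer.
          run('rsync', '-a', '--delete', "/tmp/$tag/", "produser\@$_:/opt/app/")
              for @hosts;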

      About production machines. Let me just say that scripting is a good thing. Script how to install version X of Foo on machine Y. Then do that on every production machine. And make that part of your install process.
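
      For instance (host names and the distribution below are placeholders), "install version X of Foo on machine Y, for every Y" can be a single loop; pointing CPAN.pm at a local mini-CPAN mirror is a good way to make sure the named version is the one that actually gets installed:

          #!/usr/bin/perl
          # install_module.pl - install one exact distribution on every host.
          use strict;
          use warnings;

          # Ask for the distribution, not just the module name, so you get
          # version X rather than whatever is newest today.
          my $dist  = 'AUTHOR/Foo-1.23.tar.gz';    # placeholder
          my @hosts = qw(web1 web2 db1);           # placeholder

          for my $host (@hosts) {
              system('ssh', "produser\@$host",
                     qq{perl -MCPAN -e 'install "$dist"'}) == 0
                  or die "install failed on $host\n";
          }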

      Personally I don't like having /usr/local/ on a shared mount, because that mount can become a single point of failure. Plus see the binary incompatibility issue that I had before - you're now forced to do "big bang" OS upgrades, all machines at once. However to whatever extent possible, you want to make your machines cookie cutter copies. The details are a matter of system administration.

        You seem to be thinking in terms of many independent applications that exist separately and are started by huge copy and pastes

        Not at all. I am, however, defining an "application" as one customer demand, which may be different from how you are thinking of it. So if I have a CRUD app, this will be:

        • 1 CGI App module
        • some number of backend modules
        • some number of templates
        • 1 instance script
        When someone asks for a second CRUD "app", I really just create a new instance, but I might edit the App module to allow for new features. So:
        • The original App Module is edited (not a copy, just updated)
        • some new templates are added. Any templates that aren't overridden are inherited.
        • a new instance script
        So are you saying that to create this second app, the developer checks out the ORIGINAL app and adds this second instance to it? That would solve all the updating issues, but where do you draw the line? Is there a line? Is the developer checking out ALL code? If I put the instance scripts and templates in the same tag as the app module, I can't install the app module on another machine without installing the instance, which is not desired.

        Okay, we've looked at that model. The problem was trying to keep development from clogging up production. Quite often we'll start on development and get pulled off onto other projects, leaving code incomplete, sometimes returned to, sometimes never returned to. How do we merge the development and production trees cleanly? Won't the development tree get all sorts of hanging code? (which is a current problem for us)

        (eg tagging the release and running your test suite)

        Here I'm missing details again. If we're checking out the entire codebase, are we running every bit of test code we have? How do we do the install process so that only the changed bits of code are tested on the new machine?

        Believe me, I think we're talking about the same desires, I've just failed to get a practical, working system every time I've tried, and I think it's because I'm doing too much guessing on how to implement these things.

        We've come up with systems that would work great in a 15-person shop, but in my 1-3 person shop - with a constant backlog of tasks, rapidly shifting priorities, and a Windows-based designer (vs. the Linux-based coders and servers) - I've been unable to implement a setup that actually works. (Anyone with advice on what to do with the Windows designer who edits our templates would be welcome, since he can't test his edits on his machine.)

        Script how to install version X of Foo on machine Y.

        This makes it sound as if you AREN'T checking out the entire code base, so I'm confused again. An "app" could be one codebase, but we have modules that inherit from others, and template sets that inherit from others, so I'm not seeing where to draw the line between apps. And of course, we have support modules, with their own dependencies.

        part of your install process

        What is your install process? Since you're talking about running tests and install process, it sounds like it's a bit more than a check out. What do you do?

        A good strategy is to take half your machines out of the load balancer, install there, tell the load balancer to switch which machines are online, install on the rest, then bring the rest back online. That way at all points all webservers are consistent.

        I only wish I had the budget for load balancing. I've been trying to get a test server for years. We have a number of servers for different audiences for security reasons, and a generally common codebase between them.

        Really, I'm not disagreeing with good practices, I'm just trying to figure out how to IMPLEMENT them.