http://qs321.pair.com?node_id=1178227

Greetings all,

I have a strange situation, and I'm wondering if there's a DBIx::Class-ish option that might help me. We've got a very large, old, spaghetti, denormalized Oracle database and a sea of CGIs with embedded, hardcoded SQL. Ultimately, I'd like to refactor the schema, normalize it, add FK constraints, and all the usual good things. But I can't do that without considerable risk of destabilizing the CGIs.

The strategy at this point is to carve out chunks of CGI functionality and refactor each chunk into an isolated web service with its own data store. But in doing this, we'll need to maintain two data stores for a while: the existing legacy denormalized store and the new and smaller/better normalized store. We need to keep these in sync for a while until we have refactored enough to deprecate the old schema. So what we need then is a "translation layer" behind each new web service that sees changes to the local data store and pushes those changes to the legacy database (until we can eventually deprecate the legacy db).

I'm envisioning this "translation layer" would need to understand both schemas (the new local store and the legacy Oracle), and would need to understand how to map data between the two. A lot of this will be heavily manual at the detail level, I understand. But I'm wondering if anyone knows of a scaffolding/framework option I should consider before starting something custom from scratch.
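
For what it's worth, the per-table mapping I'm imagining would be shaped roughly like this (purely a sketch; all table, column, and schema names here are invented, and the real thing would be much messier because of the denormalization):

    use strict;
    use warnings;

    # Hypothetical: one entry per source in the new schema, describing where
    # that data lives in the legacy schema and how the columns translate.
    my %map = (
        Customer => {
            legacy_source => 'CustMaster',          # legacy table CUST_MASTER
            columns       => {
                id    => 'CUST_ID',
                name  => 'CUST_NM',
                email => 'CUST_EMAIL_ADDR',
            },
        },
    );

    # After a write to the new store, push the same change to the legacy store.
    # $legacy_schema is a connected DBIx::Class schema for the Oracle database.
    sub push_to_legacy {
        my ($legacy_schema, $source, $row_data) = @_;
        my $m = $map{$source} or die "no mapping for $source";

        my %legacy_row = map  { $m->{columns}{$_} => $row_data->{$_} }
                         grep { exists $m->{columns}{$_} }
                         keys %$row_data;

        # update_or_create needs the legacy primary/unique key columns to be
        # present in %legacy_row so it can find the existing row.
        $legacy_schema->resultset( $m->{legacy_source} )
                      ->update_or_create( \%legacy_row );
    }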

The intent is to use DBIx::Class::Schema::Loader to auto-build the Perl schema classes from the legacy database schema, and we'll probably use it also for the new schema. But once we have these two sets of schema classes, I'd like a way to link them. Certainly, the details of this linking will have to be very custom.
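
For reference, I expect the class generation itself to be the easy part; something along these lines (connection details are placeholders, and the new store is shown as Postgres purely for illustration):

    use strict;
    use warnings;
    use DBIx::Class::Schema::Loader qw(make_schema_at);

    # Dump Legacy::Schema::Result::* classes from the old Oracle database.
    make_schema_at(
        'Legacy::Schema',
        { dump_directory => './lib', naming => 'current' },
        [ 'dbi:Oracle:legacydb', 'scott', 'tiger' ],   # placeholder DSN/credentials
    );

    # Dump New::Schema::Result::* classes from the new, normalized store.
    make_schema_at(
        'New::Schema',
        { dump_directory => './lib', naming => 'current' },
        [ 'dbi:Pg:dbname=newdb', 'newuser', 'newpass' ],
    );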

Thanks.


Re: Perl denormalized-to-normalized schema translation, maybe with DBIx::Class (maybe)
by ysth (Canon) on Dec 20, 2016 at 20:36 UTC

    The two data stores thing sounds like a disaster in the making. I think you are trying to bite off too much at once.

    First carve out the CGI functionality into web services. For each web service, if you can convert all the CGIs that use that data at once, make the web service use a new data store. If you will convert the CGIs piecemeal, have that web service use the old data store, and switch to a new data store when the transition to that web service is complete.

    --
    A math joke: r = | |csc(θ)|+|sec(θ)| |-| |csc(θ)|-|sec(θ)| |

      Greetings ysth,

      I'm sure you're right that this plan is a disaster in the making. Unfortunately, I don't know what other strategy I could use to fully refactor things iteratively over time. Everything in this galactic codebase uses the centralized database, with hacks upon hacks that are all dependent on the existing schema. I can't refactor even a small part of the schema without refactoring nearly all the code, which would be a monumental process.

      I could use the same denormalized data store for each new web service, but then each new web service would still be tied to the old data store. And then after I had finished refactoring all the old code, I'd still be left with one big project to refactor the data.

      UPDATE: An earlier thought I had was to use a 3-step process instead of a 2-step process.

      1. Write a "database abstraction layer" that sits in front of the legacy database but initially does very little apart from exposing a simple API (even something as silly as SQL-in/JSON-out)
      2. Refactor blocks of CGIs into services, with each service calling built-as-needed methods exposed from the data abstraction layer
      3. Finally, iteratively refactor the database schema, which would be safer at this point because I'd have a set of services that all had reasonable tests on them

      It's just an idea. Not sure if it's the right approach, though.
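
      To make step 1 concrete, the "does very little" version could literally be one PSGI endpoint doing SQL-in/JSON-out against the legacy database, something like this (sketch only; placeholder connection details, no auth, and no error handling, and obviously you'd want to control who can send SQL):

        # legacy_api.psgi -- run with: plackup legacy_api.psgi
        use strict;
        use warnings;
        use Plack::Request;
        use DBI;
        use JSON::PP qw(encode_json);

        # Placeholder connection details.
        my $dbh = DBI->connect('dbi:Oracle:legacydb', 'scott', 'tiger',
                               { RaiseError => 1 });

        my $app = sub {
            my $req   = Plack::Request->new(shift);
            my $sql   = $req->parameters->{sql};
            my @binds = $req->parameters->get_all('bind');

            # Rows come back as an arrayref of hashrefs, one per row.
            my $rows = $dbh->selectall_arrayref($sql, { Slice => {} }, @binds);

            return [ 200,
                     [ 'Content-Type' => 'application/json' ],
                     [ encode_json($rows) ] ];
        };

        $app;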

        That sounds pretty reasonable (to me anyhow).

        Would it make more sense in step 1 to have query-name/parameters-in/JSON-out? For testing, ensuring that the API lets you query both the original database schema AND the refactored schema at the same time should be useful.
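
        Something like a dispatch table keyed by query name, where each name has an implementation against the old schema and one against the new schema, is what I had in mind; tests can then run the same call against both and diff the results (all source and column names here are invented):

          use strict;
          use warnings;

          # Each named query knows how to answer from either schema; $schema is a
          # connected DBIx::Class schema (Legacy::Schema or New::Schema).
          my %query = (
              customer_by_id => {
                  legacy => sub {
                      my ($schema, %arg) = @_;
                      my $row = $schema->resultset('CustMaster')
                                       ->find({ CUST_ID => $arg{id} });
                      return $row && { id    => $row->get_column('CUST_ID'),
                                       name  => $row->get_column('CUST_NM'),
                                       email => $row->get_column('CUST_EMAIL_ADDR') };
                  },
                  new => sub {
                      my ($schema, %arg) = @_;
                      my $row = $schema->resultset('Customer')->find($arg{id});
                      return $row && { id    => $row->id,
                                       name  => $row->name,
                                       email => $row->email };
                  },
              },
          );

          # A test can then run the same name against both stores and compare:
          # is_deeply(
          #     $query{customer_by_id}{legacy}->($legacy_schema, id => 42),
          #     $query{customer_by_id}{new}->($new_schema,       id => 42),
          #     'legacy and new stores agree for customer 42',
          # );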

Re: Perl denormalized-to-normalized schema translation, maybe with DBIx::Class (maybe)
by gsiems (Deacon) on Dec 20, 2016 at 21:20 UTC
    Just out of curiosity, if the legacy store is Oracle, what's the "new local store"?

      Greetings gsiems,

      Could be Oracle; could be some other RDBMS; could be some NoSQL option; could be just a key/value cache in some rare cases. I'd like to have the freedom to be able to select the best store for the needs of the specific service.

        Oh, O.k. I was just wondering because, if the "new" store was already decided, then maybe there would be some DB-to-DB options that could help in dealing with the problem.