Re: Perl denormalized-to-normalized schema translation, maybe with DBIx::Class (maybe)

by ysth (Canon)
on Dec 20, 2016 at 20:36 UTC


in reply to Perl denormalized-to-normalized schema translation, maybe with DBIx::Class (maybe)

The two-data-stores approach sounds like a disaster in the making. I think you are trying to bite off too much at once.

First carve out the CGI functionality into web services. For each web service, if you can convert all the CGIs that use that data at once, make the web service use a new data store. If you will convert the CGIs piecemeal, have that web service use the old data store, and switch to a new data store when the transition to that web service is complete.
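
For what it's worth, a carved-out service can start life as a very thin wrapper over the existing CGI logic. A minimal sketch, assuming Plack/PSGI; legacy_lookup() is a hypothetical stand-in for whatever the CGI currently does:

    # service.psgi -- run with: plackup service.psgi
    use strict;
    use warnings;
    use Plack::Request;
    use JSON::PP qw(encode_json);

    # hypothetical: the logic lifted out of the CGI, still pointed at
    # whichever data store this service is using during the transition
    sub legacy_lookup {
        my ($id) = @_;
        return { id => $id, status => 'ok' };
    }

    my $app = sub {
        my $req = Plack::Request->new(shift);
        my $out = legacy_lookup( $req->param('id') );
        return [ 200,
                 [ 'Content-Type' => 'application/json' ],
                 [ encode_json($out) ] ];
    };

    $app;

The point is that each such service has one small, testable surface, so you can swap its data store out from under it without touching the others.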

--
A math joke: r = | |csc(θ)|+|sec(θ)| |-| |csc(θ)|-|sec(θ)| |

Re^2: Perl denormalized-to-normalized schema translation, maybe with DBIx::Class (maybe)
by gryphon (Abbot) on Dec 20, 2016 at 21:04 UTC

    Greetings ysth,

    I'm sure you're right that this plan is a disaster in the making. Unfortunately, I don't know what other strategy I could use to fully refactor things iteratively over time. Everything in this galactic codebase uses the centralized database, with hacks upon hacks that all depend on the existing schema. I can't refactor even a small part of the schema without refactoring nearly all the code, which would be a monumental undertaking.

    I could use the same denormalized data store for each new web service, but then each new web service would still be tied to the old data store. And after I had finished refactoring all the old code, I'd still be left with one big project to refactor the data.

    UPDATE: An earlier thought I had was to use a 3-step process instead of a 2-step process.

    1. Write a "database abstraction layer" that sits in front of the legacy database but initially does very little apart from exposing a simple API (even something as silly as SQL-in/JSON-out; see the sketch after this list)
    2. Refactor blocks of CGIs into services, with each service calling built-as-needed methods exposed from the data abstraction layer
    3. Finally, iteratively refactor the database schema, which would be safer at this point because I'd have a set of services that all had reasonable tests on them

    It's just an idea. Not sure if it's the right approach, though.
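
    A minimal sketch of what step 1 might look like, assuming plain DBI and JSON::PP (the LegacyDB package name and query_json() method are hypothetical):

        package LegacyDB;
        # Hypothetical step-1 abstraction layer: SQL in, JSON out, nothing else.
        use strict;
        use warnings;
        use DBI;
        use JSON::PP qw(encode_json);

        sub new {
            my ($class, $dsn, $user, $pass) = @_;
            my $dbh = DBI->connect( $dsn, $user, $pass,
                                    { RaiseError => 1, AutoCommit => 1 } );
            return bless { dbh => $dbh }, $class;
        }

        # Run an arbitrary SELECT, return the rows as a JSON array of objects.
        sub query_json {
            my ($self, $sql, @binds) = @_;
            my $rows = $self->{dbh}->selectall_arrayref(
                $sql, { Slice => {} }, @binds );
            return encode_json($rows);
        }

        1;

    A service would then call something like $db->query_json('SELECT name FROM users WHERE id = ?', $id) and know nothing about what is behind it, which is what would make step 3 safe to do later.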

      That sounds pretty reasonable (to me anyhow).

      Would it make more sense in step 1 to have query-name/parameters-in/JSON-out? For testing, ensuring that the API lets you query both the original database schema AND the refactored schema at the same time should be useful.
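
      Something along these lines, perhaps: each named query carries one SQL text per schema, so during the transition a test harness can run both and flag divergence (the query names, tables, and handles here are all made up; the two handles would come from DBI->connect):

          # Hypothetical named-query layer that runs the same logical query
          # against both schemas and warns on any divergence.
          use strict;
          use warnings;
          use JSON::PP;

          my %queries = (
              user_by_id => {
                  old => 'SELECT * FROM big_flat_table WHERE user_id = ?',
                  new => 'SELECT * FROM users WHERE id = ?',
              },
          );

          sub run_query {
              my ($dbh_old, $dbh_new, $name, @params) = @_;
              my $q = $queries{$name} or die "unknown query '$name'";
              my $old = $dbh_old->selectall_arrayref(
                  $q->{old}, { Slice => {} }, @params );
              my $new = $dbh_new->selectall_arrayref(
                  $q->{new}, { Slice => {} }, @params );

              # canonical JSON sorts hash keys, giving a stable comparison
              my $json = JSON::PP->new->canonical;
              warn "schemas disagree for '$name'\n"
                  if $json->encode($old) ne $json->encode($new);

              return $json->encode($new);
          }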
