Thanks for the reply, Michael. I prefer DBI/DBD::Sybase as well, but I can't articulate exactly why. Could you please help clarify why you advise using DBI over CTlib?
The reason I prefer the DBI/DBD route is that you use the same interface for *any* database. Here at $work, I work with Oracle, MSSQL, SQLite and occasionally PostgreSQL. When I have to interact with a particular database, I don't have to ask myself questions like:
- OK, how do I read a result set with *this* database?
- Can I use placeholders in my query? If so, how do I do that?
- What data structure do I get my results in?
Since DBI provides a standardized interface, I can be immediately productive when I switch back to a database I use rarely, without having to reacquaint myself with a module I haven't used in a year.
Sure, there are occasional differences between the databases, but DBI/DBD lets me ignore most of them. Once in a while I'll need a database-specific feature and have to read DBD::Oracle or some such. But better that than having to read documentation on all the everyday operations for selecting, inserting, updating and deleting.
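To make the point concrete, here's a rough sketch of what that uniform interface looks like. This is a hedged illustration, not code from the thread: the DSN, the `inventory` table, and its columns are all made up, and SQLite is just a stand-in driver. Retargeting Oracle, MSSQL or Sybase is mostly a matter of changing the `connect` string; the prepare/execute/fetch pattern and placeholder syntax stay the same.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical DSN for illustration -- swap in something like
# "dbi:Sybase:server=BIGBOX" or "dbi:Oracle:orcl" and the rest
# of this code is unchanged.
my $dbh = DBI->connect('dbi:SQLite:dbname=example.db', '', '',
                       { RaiseError => 1, AutoCommit => 1 });

# Placeholders work the same way regardless of the driver.
my $sth = $dbh->prepare('SELECT name, qty FROM inventory WHERE qty > ?');
$sth->execute(10);

# Results come back in the same data structures, too.
while (my $row = $sth->fetchrow_hashref) {
    printf "%s: %d\n", $row->{name}, $row->{qty};
}

$dbh->disconnect;
```

That's the whole payoff: one set of idioms to remember, whichever database happens to be on the other end.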
That's my 1/50 of a dollar.
When your only tool is a hammer, all problems look like your thumb.
Thank you for the detailed reply roboticus. It is helpful.
Thanks, Michael. My team is working on migrating our scripts from DBlib to either CTlib or DBD::Sybase, but we are still weighing the pros and cons of the two. One thing we noticed is that bcp via CTlib is much faster than the "Experimental Bulk-Load" utility in DBD::Sybase. Can you help clarify whether that utility is doing plain row-by-row inserts under the hood, and is therefore slower?