Hi Monks,
The DBD::Oracle module shows serious performance degradation when inserting rows. Apart from execute_array (or bind_param_array), every other insert method is quite slow (e.g. bind by name). For comparison, otlv4 (http://otl.sourceforge.net/otl3_intro.htm) is a C++ wrapper over Oracle's OCI interface, and its basic example http://otl.sourceforge.net/otl3_ex10.htm is much (~10 times) faster than DBD::Oracle's row-based insertion. For row-based (non-array) insertion, DBD::Oracle binds in DATA_AT_EXEC mode with OCIBindDynamic, as can be seen in the trace below; this block is repeated for each bound value.
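For reference, the two insert paths I am comparing look roughly like this (a sketch; the DSN, credentials, table and column names are made up, adjust for your own instance):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical connection details -- replace with your own.
my $dbh = DBI->connect( 'dbi:Oracle:ORCL', 'scott', 'tiger',
    { RaiseError => 1, AutoCommit => 0 } );

my $sth = $dbh->prepare(
    'INSERT INTO test_tab (id, name) VALUES (:id, :name)');

# Slow path: one execute per row; each value goes through
# dbd_bind_ph()/OCIBindDynamic (DATA_AT_EXEC), as in the trace below.
for my $row ( [ 1, 'a' ], [ 2, 'b' ] ) {
    $sth->bind_param( ':id',   $row->[0] );
    $sth->bind_param( ':name', $row->[1] );
    $sth->execute;
}

# Fast path: bind whole columns and insert them in one round trip.
my @tuple_status;
$sth->bind_param_array( ':id',   [ 3, 4 ] );
$sth->bind_param_array( ':name', [ 'c', 'd' ] );
$sth->execute_array( { ArrayTupleStatus => \@tuple_status } );

$dbh->commit;
$dbh->disconnect;
```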
...
dbd_bind_ph(1): bind :id <== '1774684594' (type 0 (DEFAULT (varchar)))
dbd_rebind_ph() (1): rebinding :id as '17746...' (not-utf8, ftype 1 (VARCHAR), csid 0, csform 0(0), inout 0)
dbd_rebind_ph_char() (1): bind :id <== '17746...' (size 10/12/0, ptype 3(VARCHAR), otype 1 )
Changing maxlen to 12
dbd_rebind_ph_char() (2): bind :id <== '17746...' (size 10/12, otype 1(VARCHAR), indp 0, at_exec 1)
bind :id as ftype 1 (VARCHAR)
OCIBindByName(163ad08,1675e58,1614d20,":id",placeh_len=3,value_p=1675a00,value_sz=12,dty=1,indp=1675e78,alenp=0,rcodep=1675e70,maxarr_len=0,curelep=0 (*=0),mode=DATA_AT_EXEC,2)=SUCCESS
OCIBindDynamic(163a970,1614d20,1675e20,7fdf910dacc0,1675e20,7fdf910daf30)=SUCCESS
OCIAttrGet(163a970,OCI_HTYPE_BIND,1675e3c,0,31,1614d20)=SUCCESS
dbd_rebind_ph(): bind :id <== '17746...' (in, not-utf8, csid 873->0->873, ftype 1 (VARCHAR), csform 0(0)->0(0), maxlen 12, maxdata_size 0)
OCIAttrSet(163a970,OCI_HTYPE_BIND, 7ffda0f7f960,0,Attr=31,1614d20)=SUCCESS
...
Is this design intentional? The OCI documentation says that OCIBindDynamic is useful when working with large data items:
https://web.stanford.edu/dept/itss/docs/oracle/10gR2/appdev.102/b14250/oci05bnd.htm#i427753. Yet C++ code using otlv4 can process BLOB/CLOB data without using
OCIBindDynamic at all. So why is DBD::Oracle implemented this way?
Thanks in advance
Update: details added in Re^2: DBD::Oracle insert row performance