http://qs321.pair.com?node_id=1071851


in reply to Re^5: Executing functions from another process
in thread Executing functions from another process

That's exactly the point I was glossing over when I recommended a dispatch table at the receiving end and a JSON producer at the sending end.

The OP doesn't have to use JSON, but it's convenient, and well understood. The point is that the OP cannot simply pass references around, he has to pass plain old data. So a JSON packet that looks something like this:

'{"action":"tupitar","params":["this","that","other"]}'

...would be easy to interpret at the receiving end. And the dispatch table avoids the mess of going directly to symbolic subrefs.
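To make that concrete, here is a minimal sketch of the receiving end, using the core JSON::PP module; the `tupitar` handler body is a placeholder, since the thread never shows what the real subroutines do:

```perl
use strict;
use warnings;
use JSON::PP qw(decode_json);   # in core since Perl 5.14

# Dispatch table: action names map to code refs, so no symbolic subrefs.
my %dispatch = (
    tupitar => sub { return join '+', @_ },   # placeholder body
);

my $packet = '{"action":"tupitar","params":["this","that","other"]}';
my $msg    = decode_json($packet);

my $handler = $dispatch{ $msg->{action} }
    or die "Unknown action: $msg->{action}";
my $result = $handler->( @{ $msg->{params} } );
print "$result\n";   # this+that+other
```

Because the table is a plain hash of code refs, unknown or malicious action names simply miss the lookup rather than reaching arbitrary package symbols.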

Things get ugly if he's got to send huge amounts of data to the subroutines. But even there, an additional layer of indirection can help; instead of sending data as a parameter, send a link to the data; a filename, a row ID from a database, or whatever. But sending a Perl reference is an indirection that will lose its magic as it gets passed around. ...just like you wouldn't pass a C pointer to local data from one process to another.
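The extra layer of indirection might look like this sketch, where the sender ships a temp-file name instead of the data itself (the `process_file` action name is invented for illustration):

```perl
use strict;
use warnings;
use JSON::PP   qw(encode_json decode_json);
use File::Temp qw(tempfile);

# Sender: write the bulky payload to a file and ship only its name.
my ($fh, $path) = tempfile();
print {$fh} "lots of data\n";
close $fh;

my $packet = encode_json({ action => 'process_file', params => [$path] });

# Receiver: resolve the indirection by opening the named file.
my $msg = decode_json($packet);
open my $in, '<', $msg->{params}[0] or die "open failed: $!";
my $data = do { local $/; <$in> };   # slurp the whole file
close $in;
print $data;
```

A row ID plus a shared database connection works the same way; the point is that only plain data crosses the process boundary.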


Dave

Re^7: Executing functions from another process
by gri6507 (Deacon) on Jan 24, 2014 at 15:09 UTC
    The subroutines that I am trying to call from one process to the other do indeed pass a bunch of data, some of which is by reference. I understand that this can't be done due to the memory space separation of the two processes. I also understand that I would need to
    1. serialize the parameters on the caller side, flattening out any references
    2. deserialize the parameters on the receiving side into the same prototype
    3. execute the actual function with the deserialized parameters, potentially modifying the by-reference parameters
    4. then serialize the modified parameters again to capture the modified values
    5. deserialize the parameters on the original caller side to update the parameter values
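The five steps above can be sketched with the core Storable module instead of Data::Dump (an assumption on my part; any serializer that preserves reference structure would do):

```perl
use strict;
use warnings;
use Storable qw(freeze thaw);   # core module

# Step 1: serialize on the caller side; refs become part of the byte string.
my $b      = "two";
my @params = ("one", \$b);
my $wire   = freeze(\@params);

# Step 2: deserialize on the receiving side into an equivalent structure.
my $copy = thaw($wire);

# Step 3: the function runs, modifying the by-reference parameter.
${ $copy->[1] } = "TWO";

# Step 4: serialize the (possibly modified) parameters again.
my $back = freeze($copy);

# Step 5: deserialize on the caller side and copy the values back.
my $result = thaw($back);
$b = ${ $result->[1] };
print "$b\n";   # TWO
```

Note that step 5 still requires explicitly copying values back into the caller's variables; deserialization alone produces a fresh structure, not the original one.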
    I've put together a very simple example of the first 3 steps where I use Data::Dump for my (de)serialization. However, I am running into an issue.
    use warnings;
    use strict;
    use Data::Dump qw(dump);

    my $a = "one";
    my $b = "two";
    my $c = \$b;
    my @d = ($a, $b, $c);
    my $str  = dump(@d);   # ("one", "two", \"two")
    my @newD = eval $str;
    $newD[0] = "A";
    $newD[1] = "B";
    ${$newD[2]} = "C";
    The problem happens on the last line, which errors out with "Modification of a read-only value attempted". Why is this the case?
      The problem happens on the last line, which errors out with "Modification of a read-only value attempted". Why is this the case?

      Because the combination of the dump + eval equates to:

      my @newD = ("one", "two", \"two");

      Which means that $newD[ 2 ] is assigned a reference to the string literal "two"; and when you indirect through that reference, you are trying to modify that string literal.

      With the other two elements of the array, new variables are created which are initialised to read-write copies of the string literals.
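The failure can be reproduced in two lines, independent of the dump/eval round trip:

```perl
use strict;
use warnings;

# A reference taken directly to a string literal points at a read-only value.
my $ref = \"two";
eval { $$ref = "C" };
print $@;   # Modification of a read-only value attempted at ...
```

Assigning the literal to a variable first (`my $b = "two"; my $ref = \$b;`) yields a writable referent, which is exactly the difference between the original `@d` and the eval'd copy.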


        So it sounds like dump+eval does not create a true copy of the data structure. What alternate methods are there to create a true clone of a structure so that even the referenced values could be writable?
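One way to get a clone with writable referents, sketched here as a suggestion rather than something from the thread, is the core Storable module's dclone, which deep-copies by serializing and deserializing in one step and allocates fresh, ordinary scalars:

```perl
use strict;
use warnings;
use Storable qw(dclone);   # core module

my $b = "two";
my @d = ("one", $b, \$b);

# dclone walks the structure and builds an independent copy;
# the cloned reference points at a writable scalar, not a literal.
my $newD = dclone(\@d);
${ $newD->[2] } = "C";     # no "read-only" error
print ${ $newD->[2] }, "\n";   # C
```

The Clone module from CPAN offers the same operation with less overhead, but dclone has the advantage of shipping with Perl.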