Fellow monks,
I want to write a simple embedded Perl processor. No problem there.
I'll write a pattern, separate the Perl code from the rest, and call eval() on it.
And now the problem: the file I evaluate is encoded in Unicode,
either UTF-8 or UTF-16.
How do I evaluate UTF-16 Perl source? In the 'normal' case,
"print 'foo'" will be encoded as "\0p\0r\0i\0n\0t\0 \0'\0f\0o\0o\0'",
and that won't eval, because every "\0" byte effectively ends the string.
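What I imagine the fix looks like is decoding the raw bytes into Perl's internal character representation before the eval. A rough, untested sketch using the core Encode module (the byte string is just the big-endian example above):

```perl
use strict;
use warnings;
use Encode qw(decode);

# The big-endian UTF-16 bytes from the example above: print 'foo'
my $bytes = "\0p\0r\0i\0n\0t\0 \0'\0f\0o\0o\0'";

# Decode the bytes into a Perl character string first; the NUL
# bytes disappear, leaving plain source that eval can handle.
my $source = decode('UTF-16BE', $bytes);

eval $source;    # prints "foo"
die $@ if $@;
```

I used 'UTF-16BE' explicitly here; plain 'UTF-16' expects a BOM to pick the byte order, so a real processor would presumably sniff the BOM first.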
Another problem is how to run a pattern against a UTF-8/UTF-16 string.
Recoding the source to some 8-bit charset before evaling will not work either:
some national characters could be lost in that conversion.
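For the pattern problem, I suspect the same decode-first approach applies: once the bytes are decoded to characters, the regex engine matches whole characters and nothing gets lost the way an 8-bit recode would lose it. A sketch (the Polish word is just an arbitrary example of mine):

```perl
use strict;
use warnings;
use Encode qw(decode encode);

# Pretend input: UTF-8 bytes spelling "za\x{17C}\x{F3}\x{142}\x{107}"
# (the Polish word with z, a, and four national characters).
my $bytes = encode('UTF-8', "za\x{17C}\x{F3}\x{142}\x{107}");

# Decode to a character string, then match on characters, not bytes.
my $text = decode('UTF-8', $bytes);

if ($text =~ /za(\w+)/) {
    # $1 holds the four national characters intact;
    # \w matches them because the string is a character string.
}
```

The point being that the pattern runs on the decoded string, never on the raw UTF-8/UTF-16 bytes.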