How much speed are you willing to sacrifice for a new feature?
As much as it takes, ’cause I need the damn feature! ☺
IMHO, a modern CSV parser must be able to parse Unicode text encoded in any Unicode character encoding scheme, with arbitrary Unicode characters (code points, or even <gulp> extended grapheme clusters) used for the CSV metacharacters. And it must properly handle the Unicode byte order mark (BOM) as prescribed by the Unicode Standard.
I'm the monk responsible for these related posts and threads:
In the case of the Concordance DAT file, the sep_char separator character, U+0014, is encoded in one byte in UTF-8: "\x14". It's the quote_char quote character (and consequently also the escape_char quote escape character), U+00FE, that happens to be encoded in two bytes in UTF-8: "\xC3\xBE".
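To make those encodings concrete, here's a small sketch (in Python rather than Perl, purely for illustration) that checks the UTF-8 byte sequences of the two Concordance metacharacters and then parses a sample DAT line with them configured as the separator and quote characters, mirroring the `sep_char`/`quote_char` settings described above. The sample line is made up for the demo.

```python
import csv
import io

SEP = "\u0014"   # Concordance field separator: one byte in UTF-8
QUO = "\u00fe"   # Concordance quote character (thorn): two bytes in UTF-8

# Verify the byte sequences mentioned in the post.
assert SEP.encode("utf-8") == b"\x14"
assert QUO.encode("utf-8") == b"\xc3\xbe"

# A hypothetical two-field DAT line: quoted fields joined by U+0014.
line = f"{QUO}DOC001{QUO}{SEP}{QUO}Smith, John{QUO}\n"

reader = csv.reader(io.StringIO(line), delimiter=SEP, quotechar=QUO)
row = next(reader)
print(row)   # the quote characters are stripped, the comma survives
```

Note that Python's `csv` module operates on decoded text, so the one-byte-vs-two-byte distinction only matters at the I/O boundary (e.g. `open(path, encoding="utf-8")`); once decoded, each metacharacter is a single code point.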