
in reply to getting rid of UTF-8

I have a file that *should* be all ISO-latin, but the program that created it seems to sprinkle UTF-8 characters round in it.

If that's the case, then that program is horribly broken, and I would recommend seeing what you can do to fix it. Anyway, do you have a sample that shows both UTF-8 and Latin-1 data?

EF BB BF is the byte order mark (BOM) encoded as UTF-8; it can be present at the beginning of UTF-8-encoded files. The fact that in your second sample it appears after a series of commas could mean that the program is trying to write a CSV file and used a UTF-8 encode function that adds the BOM to individual fields, or that it slurped a file with the wrong encoding and used that as the contents of the field. If this guess is correct, then one solution might be to first parse the CSV file and then decode the fields individually with different encodings, though I would consider that a pretty ugly workaround; plus, you'd have to know the encodings (or guess them, which is a workaround in itself).
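A minimal sketch of that workaround, assuming the input really is CSV whose individual fields may carry a stray UTF-8 BOM (the file name and the guessed per-field encoding are placeholders):

    use strict;
    use warnings;
    use Text::CSV;
    use Encode qw/decode/;

    # Parse the CSV as raw bytes so each field can be decoded
    # individually afterwards.
    my $csv = Text::CSV->new({ binary => 1, auto_diag => 2 });
    open my $fh, '<:raw', 'input.csv' or die "input.csv: $!";
    while ( my $row = $csv->getline($fh) ) {
        for my $field (@$row) {
            # Strip a leading UTF-8 BOM (EF BB BF), then decode;
            # Latin-1 is only a guess here - you'd have to know
            # (or detect) the real encoding of each field.
            $field =~ s/\A\xEF\xBB\xBF//;
            $field = decode('ISO-8859-1', $field);
        }
        # ... process the decoded fields in @$row here ...
    }
    close $fh;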

Can you tell us more about this program and the file format it outputs, and give more example data that shows the problem?

Update, just to address the question in the title and node: simply clobbering all non-ASCII characters without understanding the input data is almost never the right solution, because you'll very likely delete important characters as well. Instead, first fix the encoding problems, and if you then really want to ASCIIfy your (correctly decoded!) Unicode data, you can use e.g. Text::Unidecode.
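A minimal sketch of that last step, assuming the data has already been correctly decoded into a Perl character string:

    use strict;
    use warnings;
    use utf8;                # this source file contains UTF-8 literals
    use Text::Unidecode;

    my $unicode = "Café déjà vu";        # already-decoded character data
    my $ascii   = unidecode($unicode);   # transliterate to plain ASCII
    print $ascii, "\n";                  # prints "Cafe deja vu"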

Re^2: getting rid of UTF-8
by BernieC (Pilgrim) on Nov 24, 2022 at 22:34 UTC
    The program is out of support, and I've used it for years. It works perfectly... except, as you deduced, it puts that stuff in when it is exporting to a CSV file. I don't know how to upload the broken data. When I open it in my text editor, the editor has no problem with it, but when I save the file the UTF-8 is all still there. I loaded it into Excel, it loaded fine and showed no anomalies, but when I saved it from Excel all the UTF-8 stuff was still there. I can't see any pattern in where the byte order marks are strewn through the file.

    What should I do either to upload something here with example problematic stuff and/or be able just to brute-force fix it?

      The issue with the sample data you posted is that it is entirely ASCII with some BOMs in it, but from your description it sounded like you could have other Latin-1 (or CP1252 or Latin-9) or UTF-8 characters in it, which you don't show.

      What should I do either to upload something here with example problematic stuff

      A hex dump of the raw bytes like you showed above is fine. See also my node here.
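      If it helps, a small script along these lines produces such a dump (a sketch; 'input.csv' stands in for your real file):

          use strict;
          use warnings;

          # Print a simple hex dump of the raw bytes, 16 per line.
          open my $fh, '<:raw', 'input.csv' or die "input.csv: $!";
          local $/ = \16;               # read in 16-byte chunks
          while ( my $chunk = <$fh> ) {
              print join( ' ', map { sprintf '%02X', ord } split //, $chunk ), "\n";
          }
          close $fh;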

      and/or be able just to brute-force fix it?

      Iff your data consists entirely of a single-byte encoding like the ones I named above, and the only UTF-8 characters that appear in it are BOMs, then the regex you showed in the root node may be acceptable. However, I very much expect that if there's a BOM, other UTF-8 characters can be present as well, and if those are mixed with single-byte encodings, or you've got double-encoded characters, you'll have a tough time picking that apart. But again, you'd need to show us more representative data.
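      For reference, such a brute-force fix might look like this (a sketch only, under exactly the assumption above that BOMs are the only multi-byte sequences; the file names are placeholders, and the regex is my guess at the one from the root node):

          use strict;
          use warnings;

          # Remove every UTF-8 BOM byte sequence (EF BB BF) from the
          # raw bytes, leaving everything else untouched. Only safe
          # if BOMs really are the only UTF-8 sequences in the file!
          open my $in,  '<:raw', 'input.csv' or die "input.csv: $!";
          open my $out, '>:raw', 'clean.csv' or die "clean.csv: $!";
          while ( my $line = <$in> ) {
              $line =~ s/\xEF\xBB\xBF//g;
              print $out $line;
          }
          close $in;
          close $out or die $!;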

      Edit: Typo fixes.

        I'll try to get something together and paste a hex dump. But: I know that there is nothing but plain lower-128 ASCII characters (I just mentioned ISO-latin out of habit). It is all data that I entered, and there's no data in the CSV files that isn't something I entered. I have no idea why there's a BOM in the middle of the first record... I'll get the dump.