If you do it like:
perl -ne'print $_ if ! /^<es\s*$/' huge.xml | perl extract.pl -
Then you don't even have to wait for the huge file to be read twice; the total time could well be almost the same as it would be without the filter. Since the filter likely runs faster than the XML-parsing code, the difference in wall-clock time could be just the insignificant time it takes to filter one buffer's worth of XML before the parser sees it. It would take somewhat more CPU (probably less than 2x), but I doubt processing a huge XML file is usually CPU-bound on most systems.
Though brettski didn't seem to find even that proposal acceptable when I proposed it in chat, around the time the root node was posted.