Certainly much has been done in this area. Dave Mitchell did a lot of work in recent years (some of it under a targeted "make perl faster" grant from Booking.com, I believe), including creating new compound opcodes for certain common patterns. However, it's very hard to identify patterns that would benefit from this _in the general case_, and each one requires a lot of work followed by a lot more debugging.
For specific cases, as stevieb says, the starting point is benchmarking to identify where the time is going. Once you have the benchmarks, it's worth spending a lot of time thinking about different algorithms that could improve things - these are the changes that can yield order-of-magnitude gains, though they might require some reengineering of the code - before looking at micro-optimizations at the Perl level, or perhaps rewriting certain hot loops in C.
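To illustrate the benchmarking step, here's a toy sketch using the core Benchmark module's cmpthese to compare two approaches to the same problem (membership testing); the data and sub names are made up for the example, but this is the kind of measurement that tells you whether an algorithmic change - here, a linear scan versus a hash lookup - is worth the reengineering:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my @list = (1 .. 1000);
my %set  = map { $_ => 1 } @list;

# Run each sub for at least 1 CPU-second and print a comparison table
cmpthese(-1, {
    # O(n) per lookup: scan the whole array every time
    linear => sub { my $found = grep { $_ == 500 } @list },
    # O(1) per lookup: hash membership test
    hash   => sub { my $found = exists $set{500} },
});
```

For finding out where the time goes in an existing program (rather than comparing candidate rewrites), Devel::NYTProf is the usual profiling tool.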