I did a quick Google on Postgres pros and cons. I just read further down in the comments that it's solvable by "adding connection pooling on front."
I'm open to anything really. The boss didn't want to pony up the cash to bring in someone who could make solid recommendations... so we're just winging it! One of my colleagues is familiar with MariaDB, so we went with it.
The database holds environmental sample data. Each sample contains just over three million data points. For what I'm describing here, I have to pull just under three million of those points for around 200 samples' worth of data (200-300 should be the normal data load). That initial pull of data works out to around 1.8 billion calculations if the qualifiers are relatively simple. The qualifiers are user-definable, so they could range from simple greater-than/less-than checks to various combinations of percentages of different values from the database.
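To give a feel for what I mean by "user-definable qualifier," here's a minimal sketch: each qualifier is just a predicate applied per data point, whether it's a plain threshold or a percentage-of-a-reference-value check. All the names here are made up for illustration, not from my actual schema:

```python
# Minimal sketch of user-definable qualifiers (hypothetical names).
# Each qualifier is a predicate applied to a single data point.

def make_threshold_qualifier(op, threshold):
    """Simple greater-than / less-than qualifier."""
    if op == ">":
        return lambda value: value > threshold
    if op == "<":
        return lambda value: value < threshold
    raise ValueError(f"unsupported operator: {op}")

def make_percent_qualifier(fraction, reference):
    """Qualifier comparing a point against a percentage of a reference value."""
    return lambda value: value > fraction * reference

# Apply a qualifier across one sample's data points and count matches.
points = [0.5, 1.2, 3.8, 0.9, 2.1]
qualifier = make_threshold_qualifier(">", 1.0)
matches = sum(1 for p in points if qualifier(p))  # 1.2, 3.8, 2.1 pass
```

Multiply that inner loop by ~3 million points and 200+ samples and you get the billion-plus calculations I mentioned.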
Following that comes this script, which will ultimately perform an additional ~49 million calculations on the summary table to find the unique attributes (a chain of greater-than/less-than qualifiers based on the attribute count and category count for each attribute in each category).
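Roughly speaking, the uniqueness pass boils down to counting how many categories each attribute shows up in and keeping the ones below a cutoff. A toy sketch of that idea (field names and the count-of-one cutoff are assumptions for illustration, not my real logic):

```python
# Hypothetical sketch: flag attributes that appear in only one category
# of the summary table. Field names are made up for illustration.
from collections import defaultdict

summary = [
    ("catA", "attr1"), ("catA", "attr2"),
    ("catB", "attr2"), ("catB", "attr3"),
]

# Count distinct categories per attribute.
categories_per_attr = defaultdict(set)
for category, attr in summary:
    categories_per_attr[attr].add(category)

# Keep attributes whose category count passes the qualifier (here: exactly 1).
unique_attrs = sorted(a for a, cats in categories_per_attr.items()
                      if len(cats) == 1)
```

The real version chains several of these count comparisons per attribute per category, which is where the ~49 million calculations come from.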