If it were me, I'd put them in directories based on the hashes. I'd put "d41d8cd98f00b204e9800998ecf8427e" in
"d/4/1/d/8/d41d8cd98f00b204e9800998ecf8427e" (for example). At five levels deep you get 16^5 (about a million) leaf directories, so each would hold an average of three files (for three million files); maybe you want just four levels, giving 16^4 = 65,536 directories with an average of about 45 files each. The deeper you go, the more room to grow.
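A minimal sketch of the path construction (the function name and four-level depth are my choices, not anything the question specifies):

```python
import hashlib
import os

def hashed_path(data: bytes, depth: int = 4) -> str:
    """Build a nested storage path from the MD5 of the content.

    depth=4 takes the first four hex characters as directory levels,
    giving 16**4 = 65,536 leaf directories.
    """
    digest = hashlib.md5(data).hexdigest()
    parts = list(digest[:depth]) + [digest]
    return os.path.join(*parts)

# The empty file hashes to the example digest used above.
print(hashed_path(b""))  # d/4/1/d/d41d8cd98f00b204e9800998ecf8427e
```

Keeping the full hash as the filename at the leaf means you never need the directory levels to reconstruct it, and collisions between the prefix levels are harmless.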
I find it hard to believe you'll never do a directory listing. Eventually someone will do one by accident. We had a Linux machine at work brought to its knees by an 'ls' in a directory with too many files. We thought it had died completely, but it eventually came back.
It's possible that ext3 doesn't have this problem (I don't know), but on some filesystems even a simple existence check means a linear scan through the directory's contents.
Having looked just now, I see there's an option for 'mke2fs' called "dir_index" which "uses hashed b-trees to speed up lookups in large directories." Also, a "tune2fs -l /dev/sda1" tells me that my filesystem has this feature even though I don't recall asking for it. Maybe it's the default. It might be worth your while to look.
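To check for yourself (substitute your own device for /dev/sda1; enabling the feature on an existing filesystem is a sketch of the usual procedure, so verify against your distro's docs before running it):

```shell
# See whether dir_index is among the enabled features
tune2fs -l /dev/sda1 | grep -i features

# If it isn't, it can be turned on, then the existing
# directories reindexed with a forced fsck (unmounted!)
tune2fs -O dir_index /dev/sda1
e2fsck -fD /dev/sda1
```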