I'd like to take a more practical look at an aspect of it. Because, juvenile and high-falutin' philosophy notwithstanding, this is a question that we answer every day in our actions. It is worth trying to answer it well.
My usual initial strategy is to work off of recommendations. I've long had success in learning interesting things by asking someone who I believe knows something about the topic to recommend a good book. This is how I've found classics like Winning at New Products, Betrayal of Trust and Information Rules. Even if I don't know anyone personally, I can often get a good lead. For instance, see in How to locate a good reference (was: Poker Probability Processor) how I would tackle the problem of finding a good book on poker.
But does that always work? Consider the case of someone who doesn't know Perl who is looking at Perl Black Book. It looks pretty good, right? But as stated at Re: Re: Any Book Recommendation, I'd not recommend it. What's different?
What is different is that Perl is something that I think I know something about. I'm not about to trust random recommendations seen on the web over my own judgement. Furthermore I have enough knowledge on the topic to be aware of important factors (like attention to error reporting and security) which strongly affect quality but are invisible to most readers.
A few years ago, Matt Wright's Script Archives would have made another good example of recommendations going wrong. Today the situation is somewhat better. If you find out about Matt Wright and do a Google search for criticism of his scripts, you'll get lots of cogent commentary on what is wrong with them. My usual procedure when I get a recommendation is to look for both positive and negative commentary before I act on it. Of course people who aren't so careful could easily still be misled - but not much can realistically be done about people who only look at one side before making up their minds.
So go off of recommendations. But double-check them however you can. With your own knowledge if at all possible. Is this enough to steer you right?
Unfortunately, no. As Why People Believe Weird Things makes painfully clear, it is very easy to come to believe in something that is objectively rather dubious. (Incidental note about that book: the author got a lot of mail of the form, "I enjoyed your book very much, but you got chapter X wrong - that's actually true." People disagreed on which chapter was mistaken...) Working from your own "knowledge", you discount anything that supports the mainstream consensus, and you're conversely far more likely to accept whatever fits your beliefs.
Wouldn't it be nice to believe that this just happens to cults and to weird people? Unfortunately it doesn't. As Kuhn pointed out in The Structure of Scientific Revolutions, this is how scientists work. Furthermore it is a good thing. For it is only when you've internalized a belief system about how to think about a topic (let's call that a "paradigm") that you're able to really focus your thinking on that topic and can start coming up with useful ways to test it. Sure, you're probably wrong, but as Francis Bacon pointed out centuries ago, truth comes out of error more easily than out of confusion.
Of course errors aren't good in themselves; you want misunderstandings to be corrected. Which is why science engages in a pattern of concerted "destruction testing" of its paradigms: pushing them to the boundaries of their applicability, and looking hard at the anomalies. Eventually this causes one of those infamous "paradigm shifts", where an existing paradigm develops bad enough problems that people are forced to look for a new one.
Or at least so says a simplified version of scientific history. As normally happens, history takes a theme and feeds it back on itself until a smooth stream folds into chaos. This pattern of holding on to beliefs, challenging them, and re-evaluating from time to time happens at all levels in science, ranging from things as large as classical mechanics down to questions like which genes matter in specific developmental pathways.
For one random example, see my comments on the disagreement between researchers on whether amyloid plaques have anything to do with Alzheimer's disease.
Of course this doesn't just apply to science. Nothing is special about scientists - they are just people. They have no tools for generating knowledge that the rest of us lack. The scientific process, however, is special. Two special things about it are that it limits itself to questions which (by consensus) there are clear ways to tackle, and that it engages in systematic attempts to push its theories to the limit. (The "social sciences" typically don't so limit themselves - which is why most people don't count them as "hard sciences".) The rest of us don't generally get to choose which questions to have opinions on, and we rarely challenge our own beliefs.
So where do we stand now? To learn about something, a good starting place is to work off of recommendations, analyzed to the best of your knowledge. Unfortunately much of what you think you know is tied up in self-reinforcing belief systems. There isn't much that you can do about this inevitable fact beyond trying to challenge your own beliefs from time to time.
So what are some of these self-reinforcing belief systems? They include things like belief in alien abductions, The Bible, Evolution, trickle-down economics, and weapons of mass destruction in Iraq.
So how do you test them? Well for one thing, the next time that you are about to reject what someone has to say as "obviously" being wrong, why not stop and ask yourself how you know what you think you do? If you are honest, you might find that your beliefs tie into a self-reinforcing knot, and there is nothing that you could say to really convince someone who disagrees with you.
That's an interesting experience. I highly recommend that everyone try it more often. (Including me.)