Social scientists are technically correct to include the caveats about causation. Among scientifically educated people this is fine, because everyone understands the subtleties: that nothing can be absolutely "proven" by statistics, that causation cannot be determined from observational studies alone, and so on. And yet, despite these caveats, it is still possible to observe the world and draw valid conclusions.
The problem arises with people who have strong pre-existing beliefs and very little scientific education. People for whom everything must be black or white: either a study definitively proves a causal link, or it proves absolutely nothing. People who aren't able or willing to look at the totality of the evidence and use it to weigh different hypotheses.
Of course, there is another side to this: some people read the press release for a study that says "X is linked with Y" and come away thinking it has been "proved" that X causes Y. That is also wrong. But what you see from denialists is a kind of pseudointellectual backlash, where they simply ignore the studies and go on believing whatever their "gut" tells them, satisfied that some slogan like "correlation is not causation" or "ivory tower elitism" lets them dismiss the empirical evidence.
Also, the standards of proof used in the hard sciences often aren't appropriate in the social sciences, where repeatable controlled experiments are hard to come by. This is related to your point about the drawbacks of quantitative techniques in the social sciences, versus the more qualitative approach of historians.
The thing is, we still need to make public policy decisions, so they have to be based on the best available evidence. We can't hope to measure the effect of gun proliferation on homicide rates with the same precision that we measure the speed of light, but we can be reasonably confident that there is a substantial effect, and certainly confident enough to warrant doing something about it.