First and foremost, watch those weasel words. An article heavily laced with "can," "could," "may," "might," and "possibly" is one you need to dismiss out of hand. Science reaches some pretty firm conclusions, although those conclusions are frequently overturned down the road when more data sets come in. That weasel-word stuff is pop science, which has little in common with the real thing.
Second, who paid for the research? Confirmation bias is real, and you might want to hold off acting on that article until the work has been replicated by someone whose paycheck doesn't depend on reaching those conclusions.
With medical studies, you have to consider how large the study was and, if it's a longitudinal study, how long it ran. Case in point: the "crack baby" hysteria. A study of infants born to crack-smoking mothers showed some pretty horrific effects in the first two weeks. I can attest to how hard it was to watch such babies in the NICU; they were obviously miserable. Eventually, thousands of those kids were followed, and the study was revisited at five, ten, and fifteen years. By age five, there were no ill effects that couldn't be attributed to poverty. Turns out keeping mothers in poverty was worse for kids than crack was. Sometimes you need to wait years for the data to come in.
The number of people in a study can also be a red flag. You need large numbers because there is no way to control all the possible variables when studying people. Not only do we have different habits, some of us turn ornery after a while. With fewer than 100 people in a study, you get Andrew Wakefield (if you don't know who he is, look him up; he's a real pip).
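If you want to see why small studies mislead, a quick simulation makes the point. This is just an illustrative sketch (the function name, the 0.5 "effect" threshold, and the trial count are all my choices, not from any real study): it draws two groups from the exact same distribution, so any difference between them is pure noise, and counts how often that noise looks like a real effect.

```python
import random
import statistics

def spurious_effect_rate(n, trials=2000, threshold=0.5, seed=42):
    """Draw two groups of size n from the SAME normal distribution
    (mean 0, sd 1) and count how often their group means differ by
    more than `threshold` purely by chance."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        group_a = [rng.gauss(0, 1) for _ in range(n)]
        group_b = [rng.gauss(0, 1) for _ in range(n)]
        if abs(statistics.mean(group_a) - statistics.mean(group_b)) > threshold:
            hits += 1
    return hits / trials

# Tiny study vs. large study, identical (null) populations in both cases.
print(f"n=10:   {spurious_effect_rate(10):.1%} of comparisons look like an effect")
print(f"n=1000: {spurious_effect_rate(1000):.1%} of comparisons look like an effect")
```

With ten people per group, a sizable chunk of comparisons show a "difference" that is nothing but chance; with a thousand per group, that almost never happens. That is the whole argument for big studies in one picture.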
Just these few things will help you read a lot more skeptically and save yourself expense and aggravation by disconnecting you from the shrieking headlines.