Fact-checking has become the de facto solution for fighting fake news online. This research brings attention to an unexpected, diminished effect of fact-checking caused by cognitive biases. We conducted an experiment (66,870 decisions) comparing the change in users’ stance toward unproven claims before and after they were presented with a hypothetical fact-checked condition. The study shows that claims marked with the ‘Lack of Evidence’ label are perceived similarly to false information, unlike other borderline labels such as ‘Mixed Evidence’ or ‘Divided Evidence,’ which indicates an uncertainty-aversion bias in response to insufficient information. Next, users who initially disapprove of a claim are less likely to correct their views when shown an opposing fact-checking label than users who initially approve of the same claim, an indication of disapproval bias. On average, we confirm that fact-checking helps users correct their views and reduces the circulation of falsehoods by leading users to abandon extreme views. Despite this positive role, the presence of the two biases, uncertainty aversion and disapproval bias, reveals that fact-checking does not always produce the desired user experience and that the outcome varies with the design of fact-checking messages and people’s initial views. These observations have direct implications for multiple stakeholders, including platforms, policy-makers, and online users.