Organizations are increasingly integrating human-AI decision-making processes. It is therefore crucial to ensure that humans are able to call out algorithms' biases and errors. Biased algorithms have been shown to negatively affect access to loans, hiring processes, judicial decisions, and more. Thus, studying workers' ability to balance reliance on algorithmic recommendations with critical judgment toward them holds immense importance and potential social gain. In this study, we focus on gig-economy platform workers (MTurk) and simple perceptual judgment tasks, in which algorithmic mistakes are relatively visible. In a series of experiments, we present workers with misleading advice framed as the result of AI calculations and measure their conformity to the erroneous recommendations. Our initial results indicate that such algorithmic recommendations hold strong persuasive power, even compared to recommendations presented as crowd-based. Our study also explores the effectiveness of mechanisms for reducing workers' conformity in these situations.