We initiate the study of information elicitation mechanisms for a crowd containing both self-interested agents, who respond to incentives, and adversarial agents, who may collude to disrupt the system. Our mechanisms work in the peer prediction setting, where ground truth need not be accessible to the mechanism or even exist. We provide a meta-mechanism that reduces the design of peer prediction mechanisms to a related robust learning problem. The resulting mechanisms are $\epsilon$-informed truthful, which means that truth-telling is the highest-paying $\epsilon$-Bayesian Nash equilibrium (up to $\epsilon$-error) and pays strictly more than uninformative equilibria. The value of $\epsilon$ depends on the properties of the robust learning algorithm and typically tends to $0$ as the number of tasks and/or agents increases. We show how to use our meta-mechanism to design mechanisms with provable guarantees in two important crowdsourcing settings, even when some agents are self-interested and others are adversarial.