Concerns over data abuse have grown alongside the widespread deployment of Deep Learning Inference Services (DLIS). Mobile users in particular worry that their DLIS input data may be exploited to train new deep learning (DL) models. Mitigating this emerging concern is challenging because it requires a careful balance between preventing data abuse and preserving a highly usable service. Unfortunately, existing work does not meet this unique requirement. In this work, we propose DAPter, the first data abuse prevention mechanism. DAPter is a user-side DLIS-input converter whose outputs, while still suitable for inference, can hardly be labeled for training new models. At the core of DAPter is a lightweight generative model trained with a novel loss function that minimizes abusable information in the inference input. Moreover, adopting DAPter requires no changes to the existing provider backend or DLIS models. We conduct comprehensive experiments with our DAPter prototype on mobile devices and demonstrate that DAPter substantially raises the difficulty of data abuse with little impact on service quality and low overhead.