Deep neural networks are effective models for solving complex supervised problems with high accuracy on several types of unstructured data, such as images, videos, and text. In resource-constrained scenarios (e.g., smartphones, smartwatches, etc.), it is common practice to bootstrap the training step by reusing a pre-trained neural network, a technique known as fine-tuning. While systems employing fine-tuning can be efficient, they are susceptible to significant attacks because an attacker who obtains access to the trained model itself gains full access to personal attributes or other confidential data contained on or accessible by the model. Such systems sometimes attempt to limit this susceptibility by adding noise to the input space, or by exposing only an API interface in the hope that an attacker would reach the API rather than the model's embedding vectors and data, though these techniques are not always effective. Other systems employ several models so that an attack on a single model does not compromise data across the entire system, but such systems are typically less efficient and/or less accurate than systems that employ a single model capable of executing the same tasks.
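The fine-tuning practice described above can be illustrated with a minimal sketch: a small "pre-trained" feature extractor whose weights are frozen (reused, never updated), with only a new task-specific head trained on top. All names here (`extract_features`, `train_head`, `FROZEN_W`) are illustrative assumptions, not terms from this disclosure.

```python
# Minimal fine-tuning sketch: frozen pre-trained layer + trainable task head.
# Assumes a toy linear extractor and squared-error SGD for illustration only.

def extract_features(x, frozen_w):
    # Frozen pre-trained layer: its weights are reused, never updated.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in frozen_w]

def train_head(data, frozen_w, lr=0.1, epochs=200):
    # Only the task-specific head is trained during fine-tuning.
    head = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            f = extract_features(x, frozen_w)
            pred = sum(h * fi for h, fi in zip(head, f))
            err = pred - y
            # Gradient step updates the head alone; frozen_w is untouched.
            head = [h - lr * err * fi for h, fi in zip(head, f)]
    return head

# Hypothetical pre-trained weights and a small downstream-task dataset.
FROZEN_W = [[1.0, 0.0], [0.0, 1.0]]
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0)]
head = train_head(data, FROZEN_W)
```

Note that the attack surface discussed above arises precisely because the frozen weights (and any embeddings they produce) travel with the deployed model: whoever obtains the model obtains them.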
Accordingly, there is a need for improved devices, systems, and methods that can execute multiple tasks in the same model without exposing all information contained on or accessible by the model during an attack. The disclosed systems and methods for using hash keys to preserve privacy across multiple tasks are directed to these and other considerations.