MIT can secure cloud-based AI without slowing it down

It’s rather important to secure cloud-based AI systems, especially when they use sensitive data like photos or medical records. To date, though, that hasn’t been very practical: encrypting the data can render machine learning systems so slow as to be virtually unusable. MIT, thankfully, has a solution in the form of GAZELLE, a technique that promises to run convolutional neural networks on encrypted data without a dramatic slowdown. The key was to meld two existing techniques in a way that avoids the usual bottlenecks each of them creates.

To start, users uploading data to the AI rely on a “garbled circuits” approach that takes their input and sends two distinct inputs to each side of the conversation, hiding data for both the user and the neural network while keeping the relevant output accessible. That approach would normally be too intensive if it were used for the entire system, though, so MIT uses homomorphic encryption (which both takes and produces encrypted data) for the more demanding computation layers before sending the results back to the user. The homomorphic method has to introduce noise in order to work, though, so it’s limited to crunching one layer at a time before handing data back. In short: MIT is splitting the workload based on what each side does best.
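To make that division of labor concrete, here is a minimal Python sketch of one layer of the back-and-forth. It is not real cryptography: plain additive masking stands in for GAZELLE’s homomorphic encryption on the linear layer, and a client-side ReLU stands in for the garbled-circuit step, so the function names and the toy modulus below are illustrative assumptions rather than MIT’s actual protocol.

```python
import numpy as np

# Toy, NON-cryptographic sketch of GAZELLE-style workload splitting.
# Additive masking stands in for homomorphic encryption on the linear
# layers; the client-side ReLU stands in for the garbled-circuit step.
MODULUS = 2**31 - 1  # arithmetic is done on residues modulo this toy prime


def to_signed(v):
    """Map residues in [0, MODULUS) back to signed integers."""
    return np.where(v > MODULUS // 2, v - MODULUS, v)


def client_mask(x):
    """Client hides its layer input behind a random additive mask."""
    mask = np.random.randint(0, MODULUS, size=x.shape, dtype=np.int64)
    return (x + mask) % MODULUS, mask


def server_linear_layer(masked_x, weights):
    """Server runs the heavy linear layer (convolution / fully connected)
    directly on masked data; linearity lets the client remove the mask later."""
    return (masked_x @ weights) % MODULUS


def client_unmask_and_relu(masked_out, mask, weights):
    """Client strips the mask and applies the nonlinear ReLU itself,
    the role GAZELLE hands to garbled circuits."""
    out = to_signed((masked_out - (mask @ weights)) % MODULUS)
    return np.maximum(out, 0)


# One layer of the exchange described above.
x = np.array([3, -1, 4])                 # client's private input
W = np.array([[2, -1], [0, 3], [1, 1]])  # server's private weights
masked, mask = client_mask(x % MODULUS)
activated = client_unmask_and_relu(server_linear_layer(masked, W), mask, W)
assert np.array_equal(activated, np.maximum(x @ W, 0))  # matches the clear result
```

In the full system this exchange would repeat for every layer, which matches the article’s point that the homomorphic side only crunches one layer at a time before handing data back.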

The result is performance up to 30 times faster than what you’d get from conventional methods, along with network bandwidth needs that shrink by “an order of magnitude,” according to MIT. That could lead to more uses of internet-based neural networks for handling vital info, rather than forcing companies and institutions to either build expensive local equivalents or forgo AI-based systems altogether. Hospitals could teach AI to spot medical issues in MRI scans, for example, and share that technology with others without exposing patient data.
