Enabling privacy-preserving AI training on everyday devices
MIT researchers have developed a new method called FTTE that accelerates privacy-preserving AI training on resource-constrained edge devices by 81 percent. The technique improves federated learning by reducing memory and communication demands, enabling more efficient model training across heterogeneous devices like smartwatches and sensors. By using selective parameter updates and asynchronous server aggregation, FTTE maintains high model accuracy while preserving user data privacy. This advancement could expand the use of AI in sensitive, high-stakes fields such as health care and finance.
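The article describes FTTE only at a high level, but the two mechanisms it names, selective parameter updates and asynchronous server aggregation, can be illustrated with a toy sketch. The code below is a minimal, hypothetical illustration of those two ideas in general federated-learning terms, not FTTE's actual algorithm: the sparsification rule (keep the largest-magnitude fraction of deltas) and the staleness-discounted aggregation weight are both assumptions for illustration.

```python
import random

def selective_update(delta, fraction=0.25):
    """Keep only the largest-magnitude fraction of parameter deltas;
    zero the rest. (Hypothetical stand-in for FTTE's selective updates:
    sending fewer values cuts communication and memory cost.)"""
    k = max(1, int(len(delta) * fraction))
    keep = set(sorted(range(len(delta)),
                      key=lambda i: abs(delta[i]), reverse=True)[:k])
    return [d if i in keep else 0.0 for i, d in enumerate(delta)]

def async_aggregate(global_model, client_delta, staleness, base_lr=0.5):
    """Asynchronous aggregation: the server applies each client update as
    it arrives, down-weighting stale ones. (The 1/(1+staleness) discount
    is an assumption, not FTTE's exact rule.)"""
    w = base_lr / (1 + staleness)
    return [g + w * d for g, d in zip(global_model, client_delta)]

# Toy run: three clients report in with different staleness values.
random.seed(0)
model = [0.0] * 8
for staleness in (0, 2, 1):
    delta = [random.uniform(-1, 1) for _ in model]
    model = async_aggregate(model, selective_update(delta), staleness)
print(model)
```

With `fraction=0.25`, each simulated client transmits only 2 of its 8 parameter deltas, which is the intuition behind the reduced communication demands the article mentions; real raw data never leaves the device, preserving the federated-learning privacy guarantee.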
Opening excerpt (first ~120 words)
A new method could bring more accurate and efficient AI models to high-stakes applications like health care and finance, even in under-resourced settings.

Adam Zewe | MIT News
Publication Date: April 29, 2026
Press Contact: Abby Abazorius, [email protected], 617-253-2709, MIT News Office

Image caption: Irene Tenison, Lalana Kagal and Anna Murphy of the Decentralized Information Group (DIG) developed a new method that could bring more accurate and efficient AI models to high-stakes applications like health care and finance.
…
Excerpt limited to ~120 words for fair-use compliance. The full article is at MIT News.