Making black-box AI models ‘forget’: a step toward privacy-preserving large-scale AI
Peer-Reviewed Publication
Last Updated: 30-Apr-2025 11:08 ET (30-Apr-2025 15:08 GMT/UTC)
Pretrained large-scale AI models need to ‘forget’ specific information for privacy and computational efficiency, but no methods existed for doing so in black-box vision-language models, whose internal details are inaccessible. Now, researchers from Japan have addressed this issue with an innovative strategy based on latent context sharing, successfully getting an image classifier to forget multiple classes it was trained on. Their findings could expand the use cases of large-scale AI models while safeguarding end users’ privacy.
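The release does not spell out the method, but the core idea of black-box forgetting can be sketched as a toy experiment: a low-dimensional shared latent context is expanded into a full input context and tuned with a derivative-free optimizer, using only the model's outputs, so that accuracy drops on the classes to be forgotten while the remaining classes stay intact. Everything below (the stand-in classifier, `query_model`, the (1+1) evolution strategy, all dimensions and parameters) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a black-box classifier: we may query its outputs,
# but its weights W are treated as inaccessible (no gradients).
n_classes, dim = 5, 16
W = rng.normal(size=(n_classes, dim))      # hidden weights (inaccessible)
X = rng.normal(size=(100, dim))            # toy evaluation inputs
y = np.argmax(X @ W.T, axis=1)             # labels the model initially gets right

def query_model(context):
    """Black-box query: logits for inputs perturbed by a learned context."""
    return (X + context) @ W.T             # we only ever see the outputs

def class_accuracy(logits, cls):
    mask = (y == cls)
    return float((np.argmax(logits[mask], axis=1) == cls).mean())

forget, keep = [0], [1, 2, 3, 4]

# "Latent context sharing" (toy version): a small shared latent is expanded
# into the full context, shrinking the search space the optimizer must cover.
latent_dim = 4
expand = rng.normal(size=(latent_dim, dim)) / np.sqrt(latent_dim)

def loss(latent):
    logits = query_model(latent @ expand)
    acc_forget = np.mean([class_accuracy(logits, c) for c in forget])
    acc_keep = np.mean([class_accuracy(logits, c) for c in keep])
    return acc_forget - acc_keep           # forget class 0, preserve the rest

# Simple (1+1) evolution strategy: derivative-free, output queries only.
best = np.zeros(latent_dim)
best_loss = loss(best)
for _ in range(500):
    cand = best + 0.3 * rng.normal(size=latent_dim)
    cand_loss = loss(cand)
    if cand_loss < best_loss:
        best, best_loss = cand, cand_loss

final = query_model(best @ expand)
print("forget-class accuracy:", class_accuracy(final, 0))
print("kept-class accuracy:", np.mean([class_accuracy(final, c) for c in keep]))
```

The low-dimensional shared latent is the key design choice: derivative-free optimizers scale poorly with dimensionality, so searching a compact shared representation rather than the full context makes black-box forgetting tractable.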