Sparsity-preserving differentially private training

Large embedding models have emerged as a fundamental tool for applications in recommendation systems [1, 2] and natural language processing [3, 4, 5]. Such models enable the integration of non-numerical data into deep learning models by mapping categorical or string-valued input attributes with large vocabularies to fixed-length representation vectors using embedding layers. These models are widely deployed in personalized recommendation systems and achieve state-of-the-art performance in language tasks, such as language modeling, sentiment analysis, and question answering. In many such scenarios, privacy is an equally important consideration when deploying these models, and various techniques have been proposed to enable private data analysis. Among them, differential privacy (DP) is a widely adopted definition that limits exposure of individual user information while still allowing for the analysis of population-level patterns.
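To make the two ideas above concrete, here is a minimal, illustrative sketch (not the implementation discussed in this post): an embedding layer as a lookup table that maps string-valued features to fixed-length vectors, and a DP-SGD-style step that clips a per-example gradient to a bounded norm and adds Gaussian noise. All names and parameter values here are hypothetical.

```python
import random

EMBED_DIM = 4  # illustrative embedding dimension

def make_embedding_table(vocab, dim=EMBED_DIM, seed=0):
    # One fixed-length vector per vocabulary token (randomly initialized).
    rng = random.Random(seed)
    return {tok: [rng.uniform(-0.1, 0.1) for _ in range(dim)] for tok in vocab}

def embed(table, tokens, dim=EMBED_DIM):
    # Map a set of string-valued features to a single fixed-length vector
    # by averaging the looked-up embeddings (a simple bag-of-features).
    vectors = [table[t] for t in tokens if t in table]
    if not vectors:
        return [0.0] * dim
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def clip_and_noise(grad, clip_norm=1.0, noise_std=0.5, seed=1):
    # DP-SGD-style treatment of one per-example gradient: rescale so its
    # L2 norm is at most clip_norm, then add Gaussian noise. Real DP
    # training calibrates noise_std to a privacy budget; values here
    # are placeholders.
    rng = random.Random(seed)
    norm = sum(g * g for g in grad) ** 0.5
    scale = min(1.0, clip_norm / max(norm, 1e-12))
    return [g * scale + rng.gauss(0.0, noise_std) for g in grad]

table = make_embedding_table(["action", "comedy", "drama"])
vec = embed(table, ["action", "drama"])
noisy_grad = clip_and_noise([3.0, 4.0])
```

Note that the noise term is added to every coordinate, which is exactly why naive DP training densifies the otherwise sparse gradients of an embedding layer; preserving that sparsity under DP is the subject of this post.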
