In modern search, the underlying data can be of any modality: text, images, or tabular records. We propose adding multimodal search capabilities to OpenSearch through AutoGluon, Amazon's open-source AutoML library. With multimodal embeddings, we can perform semantic search, retrieving data by its contextual meaning rather than by exact keyword match. In this session, we'll discuss how these embeddings can also be used to detect anomalous data and to assess similarity between datasets in order to improve our models.
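As a minimal sketch of the semantic-search idea described above: embedding vectors are ranked by cosine similarity to a query vector. The toy vectors here stand in for multimodal embeddings; in a real pipeline they would come from a multimodal model (for example AutoGluon's MultiModalPredictor) and be indexed in OpenSearch's k-NN engine. The function names and the toy data are illustrative assumptions, not details from the session itself.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors (plain lists)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec, doc_vecs, top_k=2):
    """Return the top_k (index, score) pairs, ranked by similarity to the query.

    In practice the query embedding and document embeddings would be produced
    by the same multimodal model, so that text, image, and tabular items all
    live in one shared vector space.
    """
    scores = [(i, cosine_similarity(query_vec, v)) for i, v in enumerate(doc_vecs)]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:top_k]

# Toy two-dimensional embeddings standing in for model output.
docs = [
    [1.0, 0.0],   # semantically close to the query
    [0.9, 0.1],   # also close
    [0.0, 1.0],   # unrelated
]
query = [1.0, 0.05]
print(semantic_search(query, docs))
```

Ranking by similarity in a shared embedding space is what lets retrieval work across modalities: a text query can surface an image whose embedding lies nearby, with no keyword overlap required.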
Rediscover Your Data with a New Multimodal Search Capability
Speakers
James Sharpnack
Senior Applied Scientist, AWS
Xingjian Shi
Senior Applied Scientist, Amazon AI