Sublimer
Efficient self-supervised metric information retrieval for COVID19 literature
Gianluca Moro, Lorenzo Valgimigli
Paper
Entirely self-supervised
Outperforms previous state-of-the-art competitors
Uses 10 times fewer parameters
Description
The literature on coronaviruses counts more than 300,000 publications. Finding the papers relevant to arbitrary queries is essential to discovering helpful knowledge. Current state-of-the-art information retrieval (IR) engines rely on deep learning and need supervised training sets with labeled data, i.e., the queries and their corresponding relevant papers must be known a priori. Creating such labeled datasets is time-consuming and requires prominent experts’ efforts, resources that are insufficiently available under pandemic time pressure. We present a new self-supervised solution, called SUBLIMER, that requires no labels to learn to search corpora of scientific papers for those most relevant to arbitrary queries. SUBLIMER is a novel, efficient IR engine trained with deep metric learning on the unsupervised COVID-19 Open Research Dataset (CORD19). The core of our self-supervised approach is that, instead of labels, it exploits the bibliographic citations among papers to create a latent space in which spatial proximity is a metric of semantic similarity; for this reason, it can also be applied to paper corpora in other domains. Despite being self-supervised, SUBLIMER outperforms the state-of-the-art competitors on CORD19 in Precision@5 (P@5) and Bpref, even though, differently from our approach, they require both labeled datasets and an order of magnitude more trainable parameters than ours.
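The citation-as-supervision idea can be sketched with a triplet-style metric-learning objective. This is a simplified illustration, not SUBLIMER's exact loss or model: the embeddings below are random toy vectors standing in for language-model outputs, and the citation pair is hypothetical. Papers linked by a citation act as positive pairs to be pulled together in the latent space, while unrelated papers are pushed apart; retrieval then reduces to nearest-neighbour search around the embedded query.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "embeddings" for 4 papers (in SUBLIMER these would come from a
# trained language model; here they are random vectors, purely illustrative).
emb = rng.normal(size=(4, 8))

# Hypothetical citation graph: paper 0 cites paper 1, so they form a
# positive pair; paper 3 is unrelated (negative). Citations replace labels.
anchor, positive, negative = emb[0], emb[1], emb[3]

def triplet_loss(a, p, n, margin=1.0):
    """Deep-metric-learning objective: pull cited papers together,
    push unrelated papers apart in the latent space."""
    d_pos = np.linalg.norm(a - p)
    d_neg = np.linalg.norm(a - n)
    return max(0.0, d_pos - d_neg + margin)

loss = triplet_loss(anchor, positive, negative)

# Retrieval in the learned space: embed the query and rank papers by
# spatial proximity (here, a toy query lying close to paper 0).
query = emb[0] + 0.1 * rng.normal(size=8)
ranking = np.argsort(np.linalg.norm(emb - query, axis=1))
print(loss, ranking)
```

In the real system, minimizing such a loss over many citation-derived triplets shapes the latent space so that proximity approximates semantic similarity, which is what lets the engine answer arbitrary queries without any labeled query–paper pairs.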
Keywords: COVID-19; NLP; healthcare; information retrieval; language model; metric learning; self-supervised learning.