FragNet, a Contrastive Learning-Based Transformer Model for Clustering, Interpreting, Visualizing, and Navigating Chemical Space



Shrivastava, Aditya Divyakant and Kell, Douglas B ORCID: 0000-0001-5838-7963
(2021) FragNet, a Contrastive Learning-Based Transformer Model for Clustering, Interpreting, Visualizing, and Navigating Chemical Space. Molecules, 26(7), 2065.


Abstract

The question of molecular similarity is central to cheminformatics and is usually assessed via a pairwise comparison based on vectors of properties or molecular fingerprints. We recently exploited variational autoencoders to embed 6M molecules in a chemical space, such that their (Euclidean) distance within the latent space so formed could be assessed within the framework of the entire molecular set. However, the standard objective function used did not seek to manipulate the latent space so as to cluster the molecules based on any perceived similarity. Using a set of some 160,000 molecules of biological relevance, we here bring together three modern elements of deep learning to create a novel and disentangled latent space, viz. transformers, contrastive learning, and an embedded autoencoder. The effective dimensionality of the latent space was varied such that clear separation of individual types of molecules could be observed within individual dimensions of the latent space. The capacity of the network was such that many dimensions were not populated at all. As before, we assessed the utility of the representation by comparing clozapine with its near neighbors, and we did the same for various antibiotics related to flucloxacillin. Transformers, especially when, as here, coupled with contrastive learning, effectively provide one-shot learning and lead to a successful and disentangled representation of molecular latent spaces that at once use the entire training set in their construction while allowing "similar" molecules to cluster together in an effective and interpretable way.
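To make the contrastive element of the approach concrete, the sketch below shows the kind of objective (an NT-Xent / InfoNCE-style loss) commonly used to pull embeddings of two "views" of the same molecule together while pushing apart embeddings of different molecules in the batch. This is an illustrative assumption, not the authors' published code: the function name, the temperature value, and the idea of using two enumerated SMILES of each molecule as the paired views are all hypothetical stand-ins for whatever FragNet actually uses.

```python
# Minimal sketch of an NT-Xent contrastive loss in PyTorch.
# Not the FragNet implementation; all names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two views of the same molecules,
    e.g. two enumerated SMILES strings passed through a transformer encoder.
    Row i of z1 and row i of z2 form the positive pair; every other row in
    the batch serves as a negative."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)              # (2B, dim)
    sim = z @ z.t() / temperature               # scaled cosine similarities
    n = z.shape[0]
    sim.fill_diagonal_(float("-inf"))           # a sample is never its own positive
    # For row i, the positive sits at index (i + B) mod 2B, i.e. its counterpart
    # in the other half of the concatenated batch.
    targets = (torch.arange(n, device=z.device) + n // 2) % n
    return F.cross_entropy(sim, targets)


# Usage sketch: in practice the embeddings would come from a transformer
# encoder over tokenized SMILES; random tensors stand in for them here.
if __name__ == "__main__":
    batch, dim = 32, 128
    loss = nt_xent_loss(torch.randn(batch, dim), torch.randn(batch, dim))
    print(loss.item())
```

The design intuition, consistent with the abstract, is that such a loss shapes the latent space so that "similar" molecules (here, different representations of the same molecule, and by extension structurally related ones) cluster together, which the standard variational-autoencoder objective alone does not enforce.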

Item Type: Article
Uncontrolled Keywords: deep learning, artificial intelligence, generative methods, chemical space, neural networks, transformers, attention, cheminformatics
Divisions: Faculty of Health and Life Sciences
Faculty of Health and Life Sciences > Institute of Systems, Molecular and Integrative Biology
Depositing User: Symplectic Admin
Date Deposited: 12 May 2021 09:48
Last Modified: 06 Feb 2024 08:35
DOI: 10.3390/molecules26072065
Open Access URL: https://doi.org/10.3390/molecules26072065
URI: https://livrepository.liverpool.ac.uk/id/eprint/3122442