Molecule-and-text cross-modal representation learning has emerged as a promising direction for enhancing the quality of molecular representations, thereby improving performance in various scientific fields. However, most approaches rely on global alignment to learn knowledge across modalities, which may fail to capture fine-grained information, such as molecule-and-text fragments and stereoisomeric nuances, that is crucial for downstream tasks. Moreover, such fine-grained information cannot be modeled with a similar global alignment strategy, because existing datasets lack annotations for fine-grained fragments.
In this paper, we propose Atomas, a hierarchical molecular representation learning framework that jointly learns representations from SMILES strings and text. We design a Hierarchical Adaptive Alignment model to automatically learn the fine-grained fragment correspondence between the two modalities and align these representations at three semantic levels. Atomas's end-to-end training framework supports both understanding and generating molecules, enabling a wider range of downstream tasks.
Atomas achieves superior performance across 12 tasks on 10 datasets, outperforming 10 baseline models and highlighting the effectiveness and versatility of our method. Scaling experiments further demonstrate Atomas's robustness and scalability. Moreover, visualization and qualitative analysis, validated by human experts, confirm the chemical relevance of our approach.
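
To make the idea of hierarchical alignment concrete, below is a minimal, illustrative PyTorch sketch of aligning SMILES and text encodings at three semantic levels (token, fragment, global). The names (FragmentPool, HierarchicalAligner), the fixed-size attention-based fragment pooling, and the InfoNCE-style losses are assumptions for illustration only, not Atomas's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


def contrastive_loss(a, b, temperature=0.07):
    # Symmetric InfoNCE between two batches of sequence-level embeddings.
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


def token_level_loss(mol_tokens, txt_tokens, temperature=0.07):
    # Late-interaction-style token alignment: each molecule token is scored against
    # its most similar text token, then the scores are averaged per sequence pair.
    m = F.normalize(mol_tokens, dim=-1)                          # (B, Lm, D)
    t = F.normalize(txt_tokens, dim=-1)                          # (B, Lt, D)
    sim = torch.einsum('bmd,cnd->bcmn', m, t)                    # all-pairs token similarities
    logits = sim.max(dim=-1).values.mean(dim=-1) / temperature   # (B, B)
    targets = torch.arange(m.size(0), device=m.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


class FragmentPool(nn.Module):
    # Groups token features into a fixed number of "fragment" slots with learned
    # query attention -- a simple stand-in for adaptive fragment discovery.
    def __init__(self, dim, num_fragments):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_fragments, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, tokens):                          # tokens: (B, L, D)
        q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        fragments, _ = self.attn(q, tokens, tokens)     # (B, num_fragments, D)
        return fragments


class HierarchicalAligner(nn.Module):
    # Combines alignment losses at the token, fragment, and global levels.
    def __init__(self, dim=256, num_fragments=8):
        super().__init__()
        self.mol_pool = FragmentPool(dim, num_fragments)
        self.txt_pool = FragmentPool(dim, num_fragments)

    def forward(self, mol_tokens, txt_tokens):
        mol_frag = self.mol_pool(mol_tokens)
        txt_frag = self.txt_pool(txt_tokens)
        loss_token = token_level_loss(mol_tokens, txt_tokens)
        loss_fragment = contrastive_loss(mol_frag.flatten(1), txt_frag.flatten(1))
        loss_global = contrastive_loss(mol_frag.mean(1), txt_frag.mean(1))
        return loss_token + loss_fragment + loss_global


# Toy usage with random encoder outputs: batch of 4, 32 tokens, 256-dim features.
aligner = HierarchicalAligner()
loss = aligner(torch.randn(4, 32, 256), torch.randn(4, 32, 256))
loss.backward()

In Atomas itself, the fragment correspondence is learned adaptively rather than with a fixed number of slots; the sketch is only meant to convey the multi-level structure of the alignment objective.
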
@article{zhang2024atomas,
title={Atomas: Hierarchical alignment on molecule-text for unified molecule understanding and generation},
author={Zhang, Yikun and Ye, Geyan and Yuan, Chaohao and Han, Bo and Huang, Long-Kai and Yao, Jianhua and Liu, Wei and Rong, Yu},
journal={arXiv preprint arXiv:2404.16880},
year={2024}
}