
UAlign: Pushing the Limit of Template-free Retrosynthesis Prediction with Unsupervised SMILES Alignment

Source: https://arxiv.org/html/2404.00044

Bo Yang, Xin Zhao, Yu Zhang, Fan Nie, Xiaokang Yang, Yaohui Jin*, Yanyan Xu*

1. MoE Key Laboratory of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai 200240, China
2. Frontiers Science Center for Transformative Molecules (FSCTM), Zhangjiang Institute for Advanced Study, Shanghai Jiao Tong University, Shanghai 200240, China
3. Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China

*Corresponding authors: jinyh@sjtu.edu.cn, yanyanxu@sjtu.edu.cn

Abstract

Motivation: Retrosynthesis planning poses a formidable challenge in the organic chemical industry, particularly in pharmaceuticals. Single-step retrosynthesis prediction, a crucial step in the planning process, has witnessed a surge in interest in recent years due to advancements in AI for science. Various deep learning-based methods have been proposed for this task, incorporating diverse levels of dependency on additional chemical knowledge.

Results: This paper introduces UAlign, a template-free graph-to-sequence pipeline for retrosynthesis prediction. By combining graph neural networks and Transformers, our method can more effectively leverage the inherent graph structure of molecules. Based on the fact that the majority of molecule structures remain unchanged during a chemical reaction, we propose a simple yet effective SMILES alignment technique to facilitate the reuse of unchanged structures for reactant generation. Extensive experiments show that our method substantially outperforms state-of-the-art template-free and semi-template-based approaches. Importantly, our template-free method achieves effectiveness comparable to, or even surpassing, established powerful template-based methods.

Scientific contribution: We present a novel graph-to-sequence template-free retrosynthesis prediction pipeline that overcomes the limitations of Transformer-based methods in molecular representation learning and insufficient utilization of chemical information. We propose an unsupervised learning mechanism for establishing the correspondence between product atoms and reactant SMILES tokens, achieving even better results than supervised SMILES alignment methods. Extensive experiments demonstrate that UAlign significantly outperforms state-of-the-art template-free methods and rivals or surpasses template-based approaches, with up to 5% (top-5) and 5.4% (top-10) increased accuracy over the strongest baseline.

keywords:

Template-Free Retrosynthesis, Deep Learning, Chemical Reactions, Single-step Retrosynthesis Prediction

1 Introduction

Retrosynthesis prediction is a crucial task in organic chemistry, aiding the search for efficient synthetic pathways from target molecules to accessible starting materials. Despite significant advancements in chemical synthesis technology, it remains a challenge in industries such as pharmaceuticals. The extensive search space and the incomplete understanding of chemical reaction mechanisms make retrosynthesis prediction difficult, even for experienced chemists. To address this issue, computer-assisted synthetic planning (CASP) has gained increasing attention in recent years, starting from the seminal work by Corey. This paper focuses on single-step retrosynthesis prediction, the fundamental step in CASP, which aims to predict the reactants that can lead to a given product molecule through a single reaction step.

Various deep-learning-based single-step retrosynthesis prediction methods have been proposed in recent years. These methods can be broadly classified into three groups based on their dependency on additional chemical knowledge: template-based, semi-template-based and template-free methods. Template-based methods[9, 37, 7, 5] require an extra database of reaction templates. They frame the retrosynthesis prediction as a classification or retrieval problem for reaction templates suitable for the given product molecule to be synthesized. Among these solutions, Retrosim[7] utilizes molecular similarity to rank reaction templates; LocalRetro[5] and GLN[9] use graph neural networks to model the relationship between reaction templates and molecules to predict the most suitable reaction template; RetroKNN[37] further improves upon LocalRetro by addressing the issue of data imbalance using K-nearest neighbors (KNN). Template-based methods have strong interpretability and can accurately predict reactants. However, these methods are often unable to cover all cases and suffer from poor scalability due to limitations imposed by the template database.

To overcome the limitations faced by template-based methods, researchers have turned to generative models. Semi-template-based methods incorporate chemical knowledge into generative models with the help of chemical toolkits like RDKit[16], breaking free from the limitations imposed by reaction templates. The key idea of most semi-template-based methods[26, 27, 35, 6, 38] is to first convert the product into synthons based on reaction center identification and then complete the synthons into reactants. Graph neural networks are commonly used for synthon prediction, followed by leaving group attachment[27, 6], conditional graph generation[26], or SMILES generation[38] for reactant completion. Apart from all above, RetroPrime[35] utilizes two independent Transformers to accomplish synthon prediction and reactant generation as separate tasks.

Semi-template-based methods are, to a certain extent, more in line with chemical intuition. However, they increase the complexity of inference and training, as they break retrosynthesis down into two subtasks. Failures in synthon prediction directly affect the subsequent reactant completion and the overall performance. Besides, methods based on leaving groups necessitate an extra leaving-group database. This requirement, akin to template-based approaches, limits the model's scalability.

As generative models, template-free methods generate reactants directly from the given products. In comparison to generating graph structures, SMILES provides a way to represent molecules as strings. Taking advantage of this, most template-free methods[28, 15, 34, 42, 25] use Transformer models to translate product SMILES into reactant SMILES. In particular, Graph2SMILES[29] replaces the Transformer encoder with a graph neural network, resulting in a permutation-invariant pipeline. Other methods[21, 40] formulate the generation of reactants as a series of graph generation or editing operations and solve it auto-regressively. Existing template-free methods generally follow an auto-regressive generation strategy and use beam search during generation. Consequently, preserving a level of diversity in the resulting outputs has emerged as a critical consideration for template-free methods[33]. Because they use SMILES as input and output, most template-free methods overlook the rich topological and chemical-bond information present in molecular graphs. Moreover, as reactant molecules must be generated from scratch, template-free methods frequently suffer from validity issues and fail to leverage an important property of retrosynthesis prediction, namely the presence of many common substructures between products and reactants.

In this paper, we focus on the template-free generative approach for retrosynthesis prediction. Existing sequence-to-sequence methods have limitations in extracting robust molecular representations. They overlook the abundance of topological information and chemical bonds, and lack the ability to utilize atom descriptors as rich as those in graph-based methods. Furthermore, template-free methods overlook the fact that the molecular graph topology remains largely unaltered from reactants to products during chemical reactions, as they generate reactants from scratch. While there are methods that attempt to solve this problem using supervised SMILES alignment, they require complex data annotation and impact model training. Given these limitations, the following question naturally arises:

Can we effectively leverage the structural information of product molecules using a much simpler approach?

To address these issues and further enhance template-free methods, we propose a novel graph-to-sequence pipeline called UAlign. Our approach employs a specifically designed graph neural network as the encoder, incorporating information from chemical bonds during message passing to create more powerful embeddings for the decoder. We introduce an unsupervised SMILES alignment mechanism that establishes associations between product atoms and reactant SMILES tokens, which reduces the complexity of SMILES generation and lets the model focus on learning chemical knowledge. Our model outperforms existing template-free methods by a large margin and demonstrates performance comparable to template-based methods.

2 Methods

We introduce UAlign, a novel single-step retrosynthesis prediction model based on an encoder-decoder architecture, as illustrated in Fig. 1. It is a fully template-free method that involves no molecule-editing operations via RDKit[16]. We propose a specially designed variant of the Graph Attention Network, which incorporates the information of chemical bonds to enhance its ability to capture the structural characteristics of molecules.


Figure 1: Overview of UAlign. Given a product molecule graph $P$ and one of its DFS orders $O_P$, the graph is first fed into the graph neural network EGAT+ to obtain node features $H$. Positional encoding is then added to $H$ according to the given DFS order $O_P$ to generate order-aware node features $\hat{H}$. Finally, the decoder takes $\hat{H}$ as input and generates the reactant SMILES auto-regressively.

2.1 Preliminary

A molecule can be represented as a graph $G=(V,E)$, where $V$ represents the atoms and $E$ the chemical bonds. The SMILES representation of a molecule can be obtained by performing a depth-first search (DFS) starting from any arbitrary atom of the molecule graph. Given a molecule graph $G=(V,E)$, we can generate multiple DFS orders, and each DFS order corresponds to a SMILES representation of the graph. Denote the set of all possible DFS orders as $\mathcal{D}(V) \subseteq \mathcal{P}(V)$, where $\mathcal{P}(V)$ represents all permutations of the atom set $V$. For each DFS order $O \in \mathcal{D}(V)$, we denote its corresponding SMILES as $Smiles(G, O)$, which lists all atoms in the order dictated by $O$. To facilitate the subsequent elaboration, we refer to the position of an atom $a$ in the order $O$ as its rank, denoted $rank(a, O)$. The atom with the minimal rank given order $O$ is defined as the root atom, denoted $root(G, O)$.
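To make the preliminary concrete, here is a minimal pure-Python sketch of extracting one DFS order from a molecule graph and reading off atom ranks. The adjacency-dict representation and helper names are illustrative only, not part of UAlign's implementation.

```python
def dfs_order(adj, root):
    """Return one DFS order of a graph given as an adjacency dict.

    adj: {atom: [neighbor atoms]}. The neighbor iteration order decides
    which of the many valid DFS orders we get; each order corresponds to
    a different (non-canonical) SMILES of the same molecule.
    """
    order, seen, stack = [], set(), [root]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        order.append(u)
        # push unseen neighbors in reverse so they pop in listed order
        stack.extend(reversed([v for v in adj[u] if v not in seen]))
    return order

def rank(atom, order):
    """rank(a, O): position of an atom in a DFS order (0-based)."""
    return order.index(atom)

# a four-atom skeleton: chain 0-1-2 with a branch 1-3
adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
order = dfs_order(adj, root=0)  # [0, 1, 2, 3]
```

Starting the search from a different root atom (or permuting neighbor lists) yields a different element of $\mathcal{D}(V)$.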

2.2 EGAT+

Chemical bonds play a significant role in determining the properties of molecules and contain valuable information. Previous studies[12, 39, 20] have demonstrated that incorporating edge information into graph neural networks can greatly enhance their ability to represent molecular structures. To fully leverage the information brought by chemical bonds, we propose a modified version of the Graph Attention Network (GAT)[32] called EGAT+.

Our proposed model explicitly incorporates edge features, which carry the information derived from chemical bonds, into the message-passing process. During each iteration of message passing, EGAT+ applies self-attention to each node and its one-hop neighbors, computing attention coefficients from both node and edge features. It then aggregates the node and edge features of these neighbors, weighted by the attention coefficients, to update the node features. Denote the node feature of atom $u$ as $h_u^{(k)}$ and the edge feature between atoms $u$ and $v$ as $e_{u,v}^{(k)}$ at the $k$-th iteration of message passing. The message-passing mechanism can be written as

$$
\begin{aligned}
\tilde{e}_{u,v}^{(k)} &= \mathrm{FFN}_e^{(k)}\left(e_{u,v}^{(k)}\right), \\
\tilde{h}_{u}^{(k)} &= \mathrm{FFN}_n^{(k)}\left(h_{u}^{(k)}\right), \\
c_{u,v} &= \mathbf{a}^{T}\left[\tilde{h}_{u}^{(k)} \,\Vert\, \tilde{h}_{v}^{(k)} \,\Vert\, \tilde{e}_{u,v}^{(k)}\right], \\
\alpha_{u,v} &= \frac{\exp\left(\mathrm{LeakyReLU}(c_{u,v})\right)}{\sum_{v'\in\mathcal{N}(u)\cup\{u\}}\exp\left(\mathrm{LeakyReLU}(c_{u,v'})\right)}, \\
h_{u}^{(k+1)} &= \sum_{v\in\mathcal{N}(u)\cup\{u\}}\alpha_{u,v}\left(\tilde{h}_{v}^{(k)} + \tilde{e}_{u,v}^{(k)}\right), \\
e_{u,v}^{(k+1)} &= \mathrm{FFN}_m^{(k)}\left(\left[h_{u}^{(k+1)} \,\Vert\, h_{v}^{(k+1)} \,\Vert\, e_{u,v}^{(k)}\right]\right), \quad u\neq v,
\end{aligned}
\tag{1}
$$

where $\mathrm{FFN}_m^{(k)}$, $\mathrm{FFN}_e^{(k)}$ and $\mathrm{FFN}_n^{(k)}$ are three different feed-forward networks, $\mathbf{a}$ is a learnable parameter, $\mathcal{N}(u)$ denotes the one-hop neighbors of node $u$, and $\Vert$ denotes the concatenation operation. Since there is no chemical bond whose beginning and ending atoms coincide, $e_{u,u}^{(k)}$ is set as a learnable parameter shared among all atoms. Residual connections and layer normalization[2] are applied to prevent over-smoothing while enlarging the receptive field of the model[36].

The initial node features $h_u^{(0)}$ and edge features $e_{u,v}^{(0)}$ are determined via several chemical property descriptors, detailed in Supplementary Sec. 6. After $K$ iterations of message passing, we obtain the encoded features $h_u^{(K)}$ of all atoms, which make up the encoder output $H \in \mathbb{R}^{|V_P| \times d}$, where $d$ denotes the embedding size.
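For illustration, one EGAT+ message-passing iteration (Eq. 1) can be sketched in NumPy with a single attention head and plain linear maps standing in for the three FFNs. The dense edge tensor, weight shapes, and function names below are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def egat_plus_layer(h, e, adj, Wn, We, Wm, a):
    """One EGAT+ message-passing iteration (single head, linear 'FFNs').

    h:   (n, d) node features h_u^{(k)}
    e:   (n, n, d) edge features; e[u, u] plays the shared self-loop parameter
    adj: {u: list of one-hop neighbors}
    Wn, We: (d, d) stand-ins for FFN_n / FFN_e; Wm: (3d, d) for FFN_m
    a:   (3d,) attention vector
    """
    n, d = h.shape
    h_t = h @ Wn                          # tilde{h}_u
    e_t = e @ We                          # tilde{e}_{u,v}
    h_new = np.zeros_like(h)
    for u in range(n):
        nbrs = adj[u] + [u]               # N(u) ∪ {u}
        c = np.array([a @ np.concatenate([h_t[u], h_t[v], e_t[u, v]])
                      for v in nbrs])
        alpha = np.exp(leaky_relu(c))
        alpha /= alpha.sum()              # softmax over neighbors + self
        h_new[u] = sum(alpha[i] * (h_t[v] + e_t[u, v])
                       for i, v in enumerate(nbrs))
    e_new = e.copy()                      # edges updated only for real bonds
    for u in range(n):
        for v in adj[u]:
            e_new[u, v] = np.concatenate([h_new[u], h_new[v], e[u, v]]) @ Wm
    return h_new, e_new

rng = np.random.default_rng(0)
d = 4
h = rng.normal(size=(3, d))
e = rng.normal(size=(3, 3, d))
adj = {0: [1], 1: [0, 2], 2: [1]}         # a three-atom chain
h2, e2 = egat_plus_layer(h, e, adj,
                         rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                         rng.normal(size=(3 * d, d)), rng.normal(size=3 * d))
```

Stacking $K$ such layers (with the residual connections and layer normalization the paper adds) produces the encoder output $H$.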

2.3 SMILES Alignment

For single-step retrosynthesis prediction, a significant proportion of structures are shared between product and reactant molecules[34, 43]. However, SMILES-based methods typically have to generate the reactant SMILES from scratch, even though most reactant structures are identical to those of the product. This underutilizes the input information and becomes the bottleneck of template-free retrosynthesis prediction methods. Some methods[34, 25] address this issue through supervised SMILES alignment, adding supervision that establishes the correspondence between input and output tokens via cross-attention over the input and predicted tokens. This supervised training not only requires complex data-annotation algorithms but also limits the diversity of the model's attention maps, further affecting performance. To address these issues, we propose the unsupervised SMILES alignment method described below.

Assuming we could identify the location of each product atom in the reactants' SMILES and provide it to the model, a natural correspondence could be established between the input and output atoms. However, revealing this information during inference would constitute label leakage, which is not permitted. We therefore propose the following modification: given an order of the product atoms, we expect the model to generate the atom tokens of the reactants' SMILES in this order as closely as possible. By doing so, we establish a correspondence between product atoms and reactant SMILES tokens in an unsupervised manner, without leaking any labels. We refer to such reactant SMILES, which preserve the given order of atom tokens as much as possible, as order-preserving reactant SMILES. Note that since SMILES lists the atoms of a molecule according to a certain DFS order, the provided order should also be a DFS order of the product molecule.

The generation of order-preserving reactant SMILES proceeds as follows. Given the product molecule $P=(V_P, E_P)$ with a DFS order $O_P \in \mathcal{D}(V_P)$ and the corresponding set of reactant molecules $\mathcal{R}=\{R_1, R_2, \ldots, R_l\}$, for each reactant $R=(V_R, E_R) \in \mathcal{R}$ we can find a DFS order $O_R \in \mathcal{D}(V_R)$ whose atomic appearance sequence is nearly consistent with $O_P$, since the product and reactants have similar structures. For convenience, we call such an order the $O_P$-corresponding order of $R$ and denote it as $CO(R, O_P)$. Mathematically, it is defined as

$$
CO(R, O_P) = \arg\min_{o\in\mathcal{D}(V_R)} \sum_{i\in O_P\cap o}\;\sum_{j\in O_P\cap o} inv(i, j, O_P, o), \tag{2}
$$

where $inv(i, j, O_P, o)$ equals $1$ if and only if $rank(i, O_P) < rank(j, O_P)$ and $rank(i, o) > rank(j, o)$, and equals $0$ otherwise. We sort the reactants $\mathcal{R}$ in ascending order of $rank(root(R, CO(R, O_P)), O_P)$. Then we generate the SMILES of each reactant using its $O_P$-corresponding order and join them with "." to obtain the order-preserving reactant SMILES.
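The inversion count summed in Eq. (2) is straightforward to compute once atom correspondences are known. A hypothetical sketch, with atoms identified by atom-map numbers and the argmin taken over an explicit candidate set rather than all of $\mathcal{D}(V_R)$:

```python
def inversion_count(product_order, reactant_order):
    """Objective of Eq. (2): count pairs of shared atoms whose relative
    order in a candidate reactant DFS order disagrees with the product
    order O_P. Atoms are identified by (hypothetical) atom-map numbers;
    atoms present in only one order (e.g. leaving groups) are ignored."""
    shared = set(product_order) & set(reactant_order)
    p_rank = {a: i for i, a in enumerate(product_order)}
    r_rank = {a: i for i, a in enumerate(reactant_order)}
    atoms = sorted(shared)
    return sum(1 for i in atoms for j in atoms
               if p_rank[i] < p_rank[j] and r_rank[i] > r_rank[j])

def best_corresponding_order(product_order, candidate_orders):
    """CO(R, O_P) over an explicit candidate set: the candidate DFS order
    with the fewest inversions against the product order."""
    return min(candidate_orders,
               key=lambda o: inversion_count(product_order, o))

# product atoms 1..4; the reactant additionally contains atom 9
candidates = [[1, 2, 3, 9, 4], [4, 3, 2, 1, 9]]
best = best_corresponding_order([1, 2, 3, 4], candidates)  # [1, 2, 3, 9, 4]
```

Here the first candidate keeps all four shared atoms in product order (zero inversions), while the fully reversed candidate has all six pairs inverted.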

For further discussion, we denote the order-preserving reactant SMILES given the reactant molecules $\mathcal{R}$ and a DFS order $O$ of the product as $OPSmiles(\mathcal{R}, O)$. An example of the generation process is shown in Fig. 2. The detailed implementation is presented in Supplementary Sec. 5.1.


Figure 2: An example of the process of generating order-preserving reactant SMILES. The atom-mapping numbers shown in the figure are included only for clarity of explanation and are removed in our implementation to prevent any label leakage.

2.4 Decoder

The decoder takes as input the node features $H \in \mathbb{R}^{|V_P| \times d}$ generated by the encoder, together with the given DFS order $O_P$ of the product molecule graph. We use the vanilla Transformer decoder[31]. As mentioned in Sec. 2.3, the order information of product atoms is required for SMILES alignment. However, the Transformer decoder is permutation-invariant with respect to its memory[29, 17], i.e., it is insensitive to the order of the encoder features. This implies that directly performing cross-attention over $H$ may not effectively capture the relationship between product atoms and reactant SMILES tokens. To address this problem, we add positional encoding to the node features, based on the rank of each atom in the given DFS order $O_P$, to obtain order-aware node features $\hat{H}$. Then, given an input embedding sequence $Z \in \mathbb{R}^{m \times d}$, the Transformer decoder uses the order-aware node features as keys and values in all cross-attention layers, producing the decoded embeddings $\hat{Z} \in \mathbb{R}^{m \times d}$. These embeddings are then fed into a feed-forward layer $\mathrm{FFN}_1: \mathbb{R}^d \rightarrow \mathbb{R}^T$ to predict the tokens $\hat{T}$ to be generated. In summary, the decoder can be expressed as

$$
\begin{aligned}
\hat{H} &= H + \mathrm{PE}(O_P), \\
\hat{Z} &= \mathrm{TransformerDecoder}(Z, \hat{H}), \\
\hat{T} &= \mathrm{FFN}_1(\hat{Z}).
\end{aligned}
\tag{3}
$$
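As an illustration of the first step of Eq. (3), the following NumPy sketch adds rank-indexed sinusoidal positional encodings to the node features. The sinusoidal form and the function names are assumptions for illustration; the paper does not specify the PE variant here.

```python
import numpy as np

def sinusoidal_pe(positions, d):
    """Standard sinusoidal positional encoding for a list of positions."""
    pos = np.asarray(positions, dtype=float)[:, None]        # (n, 1)
    div = np.exp(-np.log(10000.0) * np.arange(0, d, 2) / d)  # (d/2,)
    pe = np.zeros((len(positions), d))
    pe[:, 0::2] = np.sin(pos * div)
    pe[:, 1::2] = np.cos(pos * div)
    return pe

def order_aware_features(H, dfs_order):
    """H_hat = H + PE(O_P): encode each atom by its rank in the DFS order.

    H: (n, d) node features, row u belonging to atom u; dfs_order: a
    permutation of range(n), so rank(a, O_P) = dfs_order.index(a).
    """
    n, d = H.shape
    ranks = [dfs_order.index(a) for a in range(n)]
    return H + sinusoidal_pe(ranks, d)

H = np.zeros((4, 8))
H_hat = order_aware_features(H, dfs_order=[2, 0, 1, 3])
```

Because the encoding is indexed by rank rather than by row position, feeding the same $H$ with a different DFS order yields different order-aware features, which is exactly what breaks the decoder's permutation invariance over its memory.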

2.5 Two stage training

There is a significant distribution shift between graph and SMILES representations. Moreover, our model is specifically designed to generate non-canonical SMILES, which may contain more complex patterns than canonical SMILES. To handle this, we propose a two-stage training strategy. The first stage aims to align the distributions of the two modalities, SMILES and molecular graphs, while enabling the model to learn the patterns of non-canonical SMILES. Given a molecule graph $M$ and one of its possible DFS orders $O_M$, the training task is to translate the graph into the corresponding SMILES representation based on the given order $O_M$; that is, the model is trained to generate $Smiles(M, O_M)$ given the molecule $M$ and the DFS order $O_M$.

Once the first-stage training converges, we proceed to the second stage, which focuses on retrosynthesis prediction. In this stage, the model is trained with the order-preserving reactant SMILES as targets: given a product molecule graph $P$, a possible DFS order $O_P$, and a set of reactants $\mathcal{R}$, the model is expected to generate $OPSmiles(\mathcal{R}, O_P)$.

2.6 Data Augmentation

Unlike Transformer-based methods [34, 28, 25] that take SMILES as input and canonical SMILES as targets, our method takes a graph as input and is trained with non-canonical SMILES, so previous SMILES augmentation tricks are not applicable. Similar to [34], we augment the training data on the fly.

For the first stage, at each iteration, for each molecule $M=(V_M, E_M)$ we have a 50% chance of using a random DFS order $O_M$ as the input for the model, with the corresponding $\mathrm{Smiles}(M, O_M)$ as the training target. For the other 50%, we randomly select another molecule $M'=(V_{M'}, E_{M'})$ from the dataset to form a new molecular graph $\tilde{M}=(V_M \cup V_{M'}, E_M \cup E_{M'})$, and find the DFS order $O_{\tilde{M}}$ that generates the canonical SMILES of $\tilde{M}$.
$\tilde{M}$ and $O_{\tilde{M}}$ are then fed into the model, with the canonical SMILES of $\tilde{M}$ as the target. This augmentation enables the model to output atom tokens according to the given DFS order and to be aware of different components within a graph.

For the second stage, at each iteration, for each product molecule $P=(V_P, E_P)$ we use a random DFS order as input with 50% probability; otherwise, we use the DFS order that produces the canonical SMILES of the product. The training target is the order-preserving reactant SMILES generated from the input DFS order. This data augmentation allows the model to focus more on the DFS order of the canonical product SMILES while also noticing the correspondence between product atoms and output SMILES tokens.
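The second-stage coin flip can be sketched as follows. This is a minimal illustration: `all_dfs_orders` and `canonical_idx` are hypothetical precomputed inputs, since in practice the canonical order would come from a toolkit's canonical atom ranking.

```python
import random

def pick_training_order(all_dfs_orders, canonical_idx, rng=random):
    # Sec. 2.6, second stage: with probability 0.5 use a random DFS order of
    # the product; otherwise use the DFS order that yields its canonical
    # SMILES. The order-preserving reactant SMILES target is then generated
    # from whichever order was chosen.
    if rng.random() < 0.5:
        return rng.choice(all_dfs_orders)
    return all_dfs_orders[canonical_idx]
```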

2.7 Loss

Both stages of training can be considered a kind of translation between graphs and SMILES, so we adopt the loss widely used for auto-regressive language generation models. Denoting the training target as $T=\{t_1, t_2, \ldots, t_n\}$ and the output of the model as $\hat{T}=\{\hat{t}_1, \hat{t}_2, \ldots, \hat{t}_n\}$, the loss can be written as

$$\mathcal{L} = \sum_{i=1}^{n} l_{cls}(\hat{t}_i, t_i), \tag{4}$$

where $l_{cls}(\cdot)$ is the classification loss.
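For auto-regressive generation, $l_{cls}$ is in practice the token-level cross-entropy. A minimal numpy sketch of Eq. (4), summing the per-position negative log-likelihood over the target sequence (the shapes and the log-softmax formulation are illustrative assumptions):

```python
import numpy as np

def sequence_loss(logits, targets):
    # logits: (n, vocab) decoder scores; targets: (n,) token indices t_1..t_n.
    # Log-softmax with a max-shift for numerical stability, then sum the
    # negative log-probability of each target token (Eq. 4 with cross-entropy).
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].sum()
```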

3 Results and Discussion

In this section, we conduct extensive experiments to comprehensively evaluate our proposed UAlign.

3.1 Evaluation Protocol

Benchmark Datasets. We adopt three datasets for evaluation: (1) USPTO-50K, consisting of 50,016 atom-mapped reactions grouped into 10 reaction classes; (2) USPTO-FULL, comprising 1,013,118 atom-mapped reactions without reaction class information; and (3) USPTO-MIT, consisting of 479,035 atom-mapped reactions without reaction class information. To ensure a fair comparison, we adopt the same training/validation/test splits as a previous study [9] for the USPTO-50K and USPTO-FULL datasets; for USPTO-MIT, the splits are aligned with the previous study [14]. The detailed data processing procedure and statistics of the processed datasets are presented in Supplementary Sec. 2.

Metrics. We evaluate using three metrics: top-$k$ accuracy, top-$k$ SMILES validity, and top-$k$ round-trip accuracy. Detailed definitions of the three metrics are provided in Supplementary Sec. 3.
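Of these, top-$k$ accuracy is the simplest: a test reaction counts as a hit if the ground-truth reactant set appears among the first $k$ beam-search predictions. A minimal sketch, assuming both predictions and references have already been canonicalized (e.g. with RDKit) before string comparison:

```python
def topk_accuracy(beam_results, ground_truths, k):
    # beam_results[i]: ranked list of (canonicalized) predicted reactant
    # SMILES for example i; ground_truths[i]: the canonical reference.
    hits = sum(gt in beams[:k] for beams, gt in zip(beam_results, ground_truths))
    return hits / len(ground_truths)

beams = [["CCO.CC(=O)O", "CCO.CC(=O)Cl"], ["c1ccccc1Br", "c1ccccc1I"]]
truth = ["CCO.CC(=O)Cl", "c1ccccc1Br"]
topk_accuracy(beams, truth, 1)  # 0.5: only the second example hits at rank 1
topk_accuracy(beams, truth, 2)  # 1.0: both hit within the top-2 beams
```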

3.2 Performance Comparison

Table 1: Top-$k$ accuracy for retrosynthesis prediction on USPTO-50K. * indicates models with SMILES augmentation. For comparison purposes, the Aug. Transformer is evaluated without test augmentation. The best performance of each model type is in bold.

| Model | 1 (cls. known) | 3 | 5 | 10 | 1 (cls. unknown) | 3 | 5 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Template-based** | | | | | | | | |
| RetroSim [7] | 52.9 | 73.8 | 81.2 | 88.1 | 37.3 | 54.7 | 63.3 | 74.1 |
| NeuralSym [24] | 55.3 | 76.0 | 81.4 | 85.1 | 44.4 | 65.3 | 72.4 | 78.9 |
| GLN [9] | 64.2 | 79.1 | 85.2 | 90.0 | 52.5 | 69.0 | 75.6 | 83.7 |
| LocalRetro [5] | 63.9 | 86.8 | 92.4 | 96.3 | 53.4 | 77.5 | 85.9 | 92.4 |
| RetroKNN [37] | **66.7** | **88.2** | **93.6** | **96.6** | **57.2** | **78.9** | **86.4** | **92.7** |
| **Semi-template-based** | | | | | | | | |
| RetroXpert* [38] | 62.1 | 75.8 | 78.5 | 80.9 | 50.4 | 61.1 | 62.3 | 63.4 |
| G2G [26] | 61.0 | 81.3 | **86.0** | **88.7** | 48.9 | 67.6 | 72.5 | 75.5 |
| GraphRetro [27] | 63.9 | 81.5 | 85.2 | 88.1 | **53.7** | 68.3 | 72.2 | 75.5 |
| RetroPrime* [35] | **64.8** | **81.6** | 85.0 | 86.9 | 51.4 | **70.8** | **74.0** | **76.1** |
| **Template-free** | | | | | | | | |
| Transformer [31] | 57.1 | 71.5 | 75.0 | 77.7 | 42.4 | 58.6 | 63.8 | 67.7 |
| SCROP [42] | 59.0 | 74.8 | 78.1 | 81.1 | 43.7 | 60.0 | 65.2 | 68.7 |
| Liu's Seq2seq [19] | – | – | – | – | 37.4 | 52.4 | 57.0 | 61.7 |
| Tied Transformer [15] | – | – | – | – | 47.1 | 67.1 | 73.1 | 76.3 |
| Aug. Transformer* [28] | – | – | – | – | 48.3 | – | 73.4 | 77.4 |
| MEGAN [21] | 60.7 | 82.0 | 87.5 | 91.6 | 48.1 | 70.7 | 78.4 | 86.1 |
| GTA* [25] | – | – | – | – | 51.1 | 67.6 | 74.8 | 81.6 |
| Graph2SMILES [29] | – | – | – | – | 52.9 | 66.5 | 70.0 | 72.9 |
| Retroformer* [34] | 64.0 | 82.5 | 86.7 | 90.2 | 53.2 | 71.1 | 76.6 | 82.1 |
| RetroBridge [13] | – | – | – | – | 50.8 | 74.1 | 80.6 | 85.6 |
| Ours* | **66.2** | **86.9** | **91.9** | **95.1** | **53.6** | **77.6** | **84.6** | **90.3** |

Table 2: Top-$k$ accuracy for retrosynthesis prediction on USPTO-MIT. The best performance of each model type is in bold.

| Model type | Model | 1 | 3 | 5 | 10 |
| --- | --- | --- | --- | --- | --- |
| Template-based | NeuralSym [24] | 47.8 | 67.6 | 74.1 | 80.2 |
| | LocalRetro [5] | **54.1** | **73.7** | **79.4** | **84.4** |
| Template-free | Liu's Seq2seq [19] | 46.9 | 61.6 | 66.3 | 70.8 |
| | AutoSynRoute [18] | 54.1 | 71.8 | 76.9 | 81.8 |
| | RetroTRAE [30] | 58.3 | – | – | – |
| | Ours | **59.9** | **76.9** | **82.0** | **86.4** |

Table 3: Top-$k$ accuracy for retrosynthesis prediction on USPTO-FULL. * indicates models with SMILES augmentation. The best performance of each model type is in bold.

| Model type | Model | 1 | 3 | 5 | 10 |
| --- | --- | --- | --- | --- | --- |
| Template-based | RetroSim [7] | 32.8 | – | – | 56.1 |
| | NeuralSym [24] | 35.8 | – | – | 60.8 |
| | GLN [9] | **39.3** | – | – | **63.7** |
| | LocalRetro [5] | 39.1 | **53.3** | **58.4** | **63.7** |
| Semi-template-based | RetroPrime* [35] | **44.1** | **59.1** | **62.8** | **68.5** |
| Template-free | MEGAN [21] | 33.6 | – | – | 63.9 |
| | Aug. Transformer* [28] | 46.2 | – | – | 73.3 |
| | Graph2SMILES [29] | 45.7 | – | – | 63.4 |
| | GTA* [25] | 46.6 | – | – | 70.4 |
| | Ours* | **50.4** | **66.1** | **71.3** | **76.2** |

Table 4: Top-$k$ SMILES validity for retrosynthesis prediction on USPTO-50K with reaction class unknown.

| Model | 1 | 3 | 5 | 10 |
| --- | --- | --- | --- | --- |
| Transformer [31] | 97.2 | 87.9 | 82.4 | 73.1 |
| Graph2SMILES [29] | 99.4 | 90.9 | 84.9 | 74.9 |
| RetroPrime [35] | 98.9 | 98.2 | 97.1 | 92.5 |
| Retroformer [34] | 99.2 | 98.5 | 97.4 | 96.7 |
| Ours | 99.8 | 99.7 | 99.3 | 98.2 |

Table 5: Top-$k$ round-trip accuracy for retrosynthesis prediction on USPTO-50K with reaction class unknown.

| Model | 1 | 3 | 5 | 10 |
| --- | --- | --- | --- | --- |
| GraphRetro [27] | 80.5 | 73.3 | 68.3 | 59.3 |
| Transformer [31] | 71.9 | 54.7 | 46.2 | 35.6 |
| Graph2SMILES [29] | 76.7 | 56.0 | 46.4 | 34.9 |
| RetroPrime [35] | 79.6 | 59.6 | 50.3 | 40.4 |
| Retroformer [34] | 78.9 | 72.0 | 67.1 | 57.2 |
| Ours | 80.9 | 74.0 | 69.0 | 60.2 |

Top-$k$ Accuracy. We compare our model with existing single-step retrosynthesis prediction methods in terms of top-$k$ accuracy on all three datasets. The results are summarized in Table 1, Table 2 and Table 3. On the USPTO-50K dataset, our model achieves a top-3 accuracy of 77.6%, a top-5 accuracy of 84.6% and a top-10 accuracy of 90.3% under the reaction-class-unknown setting, surpassing the SOTA template-free method by 3.5%, 4.0% and 4.7% respectively. With the reaction class given, our model achieves a top-1 accuracy of 66.2%, a top-5 accuracy of 91.9% and a top-10 accuracy of 95.1%, exceeding the SOTA template-free method by 2.2%, 4.4% and 4.9% respectively. Moreover, our model outperforms all the semi-template-based methods by a noticeable margin. It is also encouraging that our method, although template-free, achieves competitive or even superior performance against powerful template-based methods such as LocalRetro under both settings of the USPTO-50K dataset. On the USPTO-MIT dataset, our model achieves a top-1 accuracy of 59.9% and a top-10 accuracy of 86.4%, significantly outperforming even the template-based SOTA method LocalRetro. Additionally, our model achieves a top-1 accuracy of 50.4% on the USPTO-FULL dataset, exceeding the current SOTA model GTA by 3.8%. These findings demonstrate the effectiveness of our method. The contribution of each proposed module is further validated in Sec. 3.3.

It is noteworthy that while template-based approaches achieve remarkable performance on the USPTO-50K dataset, their reliance on external template libraries becomes a constraint as datasets grow in scale and complexity, leading to a substantial degradation in performance. In contrast, template-free methods demonstrate superior versatility and adaptability, qualities that make them especially suitable for large-scale and intricate datasets.

Top-$k$ SMILES Validity. SMILES generation models for retrosynthesis often struggle to maintain SMILES validity: unlike graph-based models, they must produce output that adheres to the parsing rules of SMILES without leveraging chemical toolkits such as RDKit, and are therefore more susceptible to generating invalid SMILES. We use the vanilla Transformer, RetroPrime, Retroformer and Graph2SMILES as strong baselines for this comparison; methods based on templates or molecule editing are excluded because their templates or chemical toolkits guarantee the validity of the generated SMILES. As shown in Table 4, our model demonstrates superior validity at all values of $k$ compared to the other models, even without employing canonical SMILES as the training target. This improvement can be attributed to the proposed two-stage training strategy and data augmentation, which help the model capture various SMILES patterns effectively.
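As a toy illustration of why validity is non-trivial for sequence decoders: even necessary syntactic conditions, such as balanced parentheses and paired ring-closure digits, must be learned token by token. The sketch below checks only those two conditions; it ignores `%nn` two-digit ring closures and all valence chemistry, and true validity is judged by an RDKit parse, as in our evaluation.

```python
def plausible_smiles(s):
    # Necessary-but-not-sufficient syntactic checks for a SMILES string:
    # balanced branch parentheses and paired single-digit ring closures.
    # Digits inside bracket atoms (isotopes, H counts, charges) are skipped.
    depth, open_rings, in_bracket = 0, set(), False
    for ch in s:
        if ch == '[':
            in_bracket = True
        elif ch == ']':
            in_bracket = False
        elif in_bracket:
            continue
        elif ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:
                return False
        elif ch.isdigit():
            open_rings ^= {ch}  # toggle: first occurrence opens, second closes
    return depth == 0 and not open_rings
```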

Top-$k$ Round-Trip Accuracy. To assess the plausibility of our predicted synthesis plans, we use the Molecular Transformer [23] as the benchmark reaction prediction model and calculate the top-$k$ round-trip accuracy. We take RetroPrime, Retroformer and Graph2SMILES as strong SMILES-based baselines and also include the graph-based method GraphRetro. The results are presented in Table 5: our model outperforms all SMILES-based baselines by a considerable margin and even exceeds the well-established graph-based method GraphRetro. This underscores the efficacy of our unsupervised SMILES alignment mechanism, which enables the model to efficiently reuse substructures from product molecules to construct reactants and thereby focus more intently on learning reaction mechanisms, yielding more plausible predictions. In summary, our model exhibits a robust capacity for generating coherent and effective synthesis pathways, suitable for downstream applications such as multi-step retrosynthesis planning.
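Round-trip accuracy scores a prediction as correct when a forward reaction model maps the predicted reactants back to the product. A sketch of one common per-example definition follows (the fraction of the top-$k$ predictions that round-trip; the exact definition used here is in Supplementary Sec. 3, and the dictionary-backed `forward_model` below is a hypothetical stand-in for the Molecular Transformer):

```python
def round_trip_at_k(predictions, product, forward_model, k):
    # Fraction of the top-k predicted reactant sets that the forward model
    # maps back to the target product.
    top = predictions[:k]
    return sum(forward_model(r) == product for r in top) / k

# Hypothetical stand-in forward model: a lookup over one known reaction.
known = {"CCO.CC(=O)O": "CC(=O)OCC"}
forward_model = known.get

preds = ["CCO.CC(=O)Cl", "CCO.CC(=O)O"]
round_trip_at_k(preds, "CC(=O)OCC", forward_model, 2)  # 0.5: rank 2 round-trips
```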

3.3 Ablation Study

Table 6: Effects of different modules on retrosynthesis performance in the reaction-class-unknown setting of USPTO-50K. The best performance is in bold.

| Method | 1 (acc.) | 3 | 5 | 10 | 1 (round-trip) | 3 | 5 | 10 | 1 (validity) | 3 | 5 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| UAlign (Full Version) | **53.6** | **77.6** | **84.6** | **90.3** | **80.9** | **74.0** | **69.0** | **60.2** | **99.8** | **99.7** | **99.3** | **98.2** |
| − Two Stage Training | 53.1 | 75.6 | 82.8 | 88.8 | 80.4 | 72.3 | 66.9 | 57.6 | 99.2 | 97.7 | 96.1 | 92.2 |
| − Data Augmentation | 48.5 | 71.8 | 79.2 | 85.9 | 78.2 | 70.5 | 65.1 | 56.0 | 99.1 | 98.4 | 97.7 | 95.6 |
| − SMILES Alignment | 45.5 | 65.2 | 71.4 | 77.5 | 70.3 | 57.9 | 51.0 | 40.1 | 97.8 | 95.3 | 93.6 | 89.2 |
| Transformer [31] | 42.4 | 58.6 | 63.8 | 67.7 | 71.9 | 54.7 | 46.2 | 35.6 | 97.2 | 87.9 | 82.4 | 73.1 |

We investigate the effects of the different components in our proposed pipeline. The results are summarized in Table 6.

Two Stage Training. We eliminate the initial training phase and directly train the model for the retrosynthesis prediction task. As indicated in Table 6, the two-stage training strategy consistently leads to improvements in all evaluated metrics. This implies that the two-stage training strategy effectively enables the model to learn the intricacies of molecular SMILES representations, thereby yielding higher quality and more plausible retrosynthetic analysis outcomes.

Data Augmentation. We remove the data augmentation during the second training stage, i.e., we train solely with the DFS order that generates canonical SMILES. Table 6 shows a significant decline across all metrics, demonstrating that our data augmentation substantially improves the model's performance.

SMILES Alignment. We remove all operations related to SMILES alignment during training. This includes removing the positional encoding in Eq. 3, so the features $H$ directly serve as the input memory for the Transformer decoder. Since the DFS-order input is eliminated, the model is no longer trained with order-preserving reactant SMILES as the target but instead with canonical reactant SMILES. Additionally, in this set of experiments we remove the first training stage, which aligns the graph and SMILES modalities, as the model architecture changes. The results in Table 6 show a significant decline in performance compared to the full version, indicating that the proposed SMILES alignment algorithm is crucial for achieving excellent performance.

It is worth noting that even without data augmentation, two-stage training and SMILES alignment, our model still outperforms the vanilla Transformer by a large margin on all metrics, as reported in the last row of Table 6. This indicates that graph-based molecular representation learning retains advantages over SMILES-based approaches, and that our proposed EGAT+ can extract effective molecular representations for downstream usage.

3.4 Case Study (Visualization of the cross-attention mechanism in the Transformer with UAlign)


Figure 3: Visualization of cross-attention over order-aware node features and the predicted tokens. The numbers on the y-axis are the atom-map numbers of atoms in the product. Reactant atoms that do not appear in the product are colored red on the x-axis. $\circ$ represents the end token.

We randomly select a case from the dataset and show its cross-attention map in Fig. 3. The map indicates the correlation between reactant tokens and nodes in the input product graph, obtained by averaging the attention coefficients over all attention heads. From the figure, it is evident that the predicted tokens successfully locate their corresponding atoms in the product, which contributes to the accurate prediction. The SMILES alignment can also be observed to assist the model in correctly identifying the reaction center. According to the figure, the bond between atoms C:11 and N:9 breaks during the transformation into reactants. Our model notices this and focuses the attention of token $\hat{t}_7$ on the reaction centers C:11 and N:9. This strategic focusing successfully guides the completion of the reactants, ensuring that the leaving group is correctly attached to the appropriate atoms. Additionally, the attention coefficient at token $\hat{t}_{14}$ is concentrated on atoms C:1 and N:9, which are the first atoms of each reactant molecule according to the given DFS order. This further indicates that our model correctly identifies the sites where the reaction occurs and accurately cleaves the chemical bonds. Moreover, the attention of newly generated structures (i.e., tokens $\hat{t}_7$ to $\hat{t}_{13}$) is directed towards atoms C:1, C:11 and O:2, which correspond to the specific synthon they will attach to.
This demonstrates that our model can generate appropriate functional groups based on molecular structure information to form the reactants. All these results illustrate that our proposed SMILES alignment method assists the model in comprehending molecular structural information and helps it focus on learning chemical rules.

To further investigate the impact of the proposed SMILES alignment mechanism on model training, we visualize the cross-attention coefficients of different Transformer decoder layers in Supplementary Fig. 1 of the Supplementary Information. We observe significant variations in the cross-attention across layers, and the correspondence between tokens and atoms is not established exclusively at certain layers, such as the first or last. This suggests that directly imposing supervised signals on the cross-attention coefficients [34, 25] for SMILES alignment, whether applied to all layers or only the last one, is unwise. It also supports our statement in Sec. 2.3 that unsupervised SMILES alignment preserves the diversity of attention maps, whereas supervised alignment restricts it, negatively impacting model training and performance. This is why our unsupervised SMILES alignment mechanism achieves better results than supervised SMILES alignment.

3.5 Case Study (Multi-step Retrosynthetic Pathway Planning)


Figure 4: Multi-step retrosynthesis predictions by our method: (a) Mitapivat, (b) Pacritinib citrate, (c) Daprodustat. Reaction centers and leaving groups are highlighted in different colors. The pathways of molecules (a) and (b) come from the literature, while the last one is verified by chemical experts.

To explore the suitability of our model for multi-step retrosynthetic pathway planning, we select three distinct molecules as targets for synthetic route design. The synthesis routes are obtained through iterative calls to our UAlign model trained on the USPTO-FULL dataset. The predicted pathways are summarized in Fig. 4.

The first case study involves Mitapivat, a compound approved for the treatment of hereditary hemolytic anemias in February 2022 [1]. Our model successfully predicted the five-step synthetic route reported in [3], with each step consistently ranked within the top-2 predictions. The first step entails an amide coupling reaction, which our model placed at rank 2, yielding the reactants 1-(cyclopropylmethyl)piperazine (compound 2) and 4-(quinoline-8-sulfonamido)benzoic acid (compound 3). Notably, at the initial step our model also proposed an alternative synthesis via Borch reductive amination, which was ranked first and is consistent with the synthetic route delineated by Saunders et al. [22]. Subsequently, for the synthesis of 4-(quinoline-8-sulfonamido)benzoic acid, the model precisely executed a functional-group protection strategy in the second step and accurately anticipated the subsequent formation of the sulfonamide, effectively deconstructing the target molecule into readily available precursors. For the synthesis of 1-(cyclopropylmethyl)piperazine, the model strategically protected the amine functional group with a tert-butyloxycarbonyl moiety at the outset and, in the final step, predicted the N-alkylation reaction with top-ranking accuracy. This example illustrates our model's capability to uncover diverse reaction centers in molecular retrosynthetic design and to generate plausible reactant combinations based on these insights.

The second case under scrutiny is Pacritinib, an orally bioavailable and isoform-selective JAK2 inhibitor for the treatment of patients with myelofibrosis, which received FDA approval on February 28, 2022 [41]. As shown in Fig. 4(b), our model successfully delineates an eight-step synthesis, as described in the literature [4], tracing the synthetic pathway from commercially available 5-nitrosalicylaldehyde and 2,6-dichloropyrimidine to the final product. The initial step of the reverse synthesis is an olefin metathesis, ranked first in order of likelihood, followed by a rank-2 aromatic substitution of 4-(3-((allyloxy)methyl)phenyl)-2-chloropyrimidine (compound 12) and 3-((allyloxy)methyl)-4-(2-(pyrrolidin-1-yl)ethoxy)aniline (compound 13). Subsequently, the synthesis of 4-(3-((allyloxy)methyl)phenyl)-2-chloropyrimidine was correctly identified via successive allyl substitution and Suzuki cross-coupling reactions as the top and second choices. The reverse synthesis of 3-((allyloxy)methyl)-4-(2-(pyrrolidin-1-yl)ethoxy)aniline proceeds by reduction of the nitro group, followed by another allyl substitution. In the final step, the model's highest-probability prediction was reduction of the aldehyde group, followed by a nucleophilic substitution. Despite the synthesis route involving a considerable number of steps and a variety of reaction types, our model accurately predicted each step within its top-2 choices. This accomplishment signifies the robustness and efficacy of our model in the context of retrosynthetic analysis.

The final case is Daprodustat, the first oral hypoxia-inducible factor prolyl hydroxylase inhibitor (HIF-PHI) for the treatment of renal anemia caused by chronic kidney disease (CKD) [11]. This novel compound received FDA approval on February 1, 2023 [41]. Our model predicted a three-step synthetic route. The first step, hydrolysis of an ester, appears at rank 3 and is aligned with the route provided by Duffy et al. [10]. Although the next two steps provided by our method do not exist in the literature, they are all explainable. The synthesis of ethyl (1,3-dicyclohexyl-2,4,6-trioxohexahydropyrimidine-5-carbonyl)glycinate (compound 21) was identified as the top choice via dehydration condensation of 1,3-dicyclohexyl-2,4,6-trioxohexahydropyrimidine-5-carboxylic acid (compound 23) and ethyl glycinate (compound 22), which avoids the toxic ethyl isocyanatoacetate reported in the literature. In the final step, the model's highest-probability prediction was amidation of the ester, resulting in cost-effective and readily accessible starting materials. This case demonstrates the robust extrapolative capacity of our model, highlighting its potential to generate synthetic routes that surpass those documented in the literature.

4 Limitations

This work does not integrate much domain knowledge related to chemical reaction mechanisms in its design, which to some extent compromises its interpretability. Like most template-free methods, our work also faces challenges in generating diverse results. We leave these aspects for exploration in future work.

5 Conclusion

We present UAlign, a novel graph-to-sequence pipeline that achieves state-of-the-art performance among template-free methods. Our approach outperforms existing template-free and semi-template-based methods while achieving results comparable to template-based methods. By utilizing a specially designed graph neural network as the encoder, our model effectively leverages chemical and structural information from molecular graphs, producing powerful embeddings for the decoder. Additionally, our proposed unsupervised SMILES alignment mechanism facilitates the reuse of substructures shared between reactants and products during generation, allowing the model to prioritize chemical knowledge even without complex data annotations, which significantly enhances the performance of the pipeline. In future work, we plan to further explore multi-step retrosynthesis planning with UAlign as the single-step retrosynthesis prediction backbone.


Supplementary information

We provide details about our metrics, data preprocessing, and additional visualization results in the Supplementary Information.

Declarations

5.1 Funding

This work was supported by the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), the SJTU AI for Science platform, and the Fundamental Research Funds for the Central Universities.

5.2 Competing interests

The authors declare no competing interests.

5.3 Data availability

5.4 Code availability

5.5 Author contribution

K.Z. was responsible for the code implementation, algorithm design, and the majority of the manuscript writing. X.Z. conducted the Ablation Study. Y.Z. and B.Y. managed multi-step Retrosynthetic Pathway Planning. F.N. handled all visualizations. The remaining authors contributed to the polishing of the article. Y.X., Y.J. and X.Y. supervised the research.

5.6 Ethics approval and consent to participate

Not applicable

5.7 Materials availability

Not applicable

5.8 Consent for publication

Not applicable

References

  • Al-Samkari and van Beers [2021] Al-Samkari H, van Beers EJ (2021) Mitapivat, a novel pyruvate kinase activator, for the treatment of hereditary hemolytic anemias. Therapeutic Advances in Hematology 12:20406207211066070. 10.1177/20406207211066070
  • Ba et al [2016] Ba JL, Kiros JR, Hinton GE (2016) Layer normalization
  • Benedetto Tiz et al [2022] Benedetto Tiz D, Bagnoli L, Rosati O, et al (2022) Fda-approved small molecules in 2022: Clinical uses and their synthesis. Pharmaceutics 14(11):2538
  • Chang et al [2015] Chang H, Yajun G, Jiajuan P, et al (2015) Synthesis of pacritinib hydrochloride. Chinese Journal of Pharmaceuticals 46(12). 10.16522/j.cnki.cjph.2015.12.001
  • Chen and Jung [2021] Chen S, Jung Y (2021) Deep retrosynthetic reaction prediction using local reactivity and global attention. JACS Au 1(10):1612–1620
  • Chen et al [2023] Chen Z, Ayinde OR, Fuchs JR, et al (2023) G 2 retro as a two-step graph generative models for retrosynthesis prediction. Communications Chemistry 6. 10.1038/s42004-023-00897-3
  • Coley et al [2017] Coley CW, Rogers L, Green WH, et al (2017) Computer-assisted retrosynthesis based on molecular similarity. ACS central science 3(12):1237–1245
  • Corey [1991] Corey EJ (1991) The logic of chemical synthesis: multistep synthesis of complex carbogenic molecules (nobel lecture). Angewandte Chemie International Edition in English 30(5):455–465
  • Dai et al [2019] Dai H, Li C, Coley C, et al (2019) Retrosynthesis prediction with conditional graph logic network. Advances in Neural Information Processing Systems 32
  • Duffy et al [2007] Duffy K, Fitch D, Jin J, et al (2007) Preparation of n-substituted pyrimidine-trione amino acid derivatives as prolyl hydroxylase inhibitors. WO2007150011A2
  • Hara et al [2015] Hara K, Takahashi N, Wakamatsu A, et al (2015) Pharmacokinetics, pharmacodynamics and safety of single, oral doses of gsk1278863, a novel hif-prolyl hydroxylase inhibitor, in healthy japanese and caucasian subjects. Drug Metabolism and Pharmacokinetics 30(6):410–418. https://doi.org/10.1016/j.dmpk.2015.08.004, URL https://www.sciencedirect.com/science/article/pii/S1347436715000518
  • Hu et al [2019] Hu W, Liu B, Gomes J, et al (2019) Strategies for pre-training graph neural networks. In: International Conference on Learning Representations
  • Igashov et al [2023] Igashov I, Schneuing A, Segler M, et al (2023) Retrobridge: Modeling retrosynthesis with markov bridges. In: The Twelfth International Conference on Learning Representations
  • Jin et al [2017] Jin W, Coley C, Barzilay R, et al (2017) Predicting organic reaction outcomes with weisfeiler-lehman network. Advances in neural information processing systems 30
  • Kim et al [2021] Kim E, Lee D, Kwon Y, et al (2021) Valid, plausible, and diverse retrosynthesis using tied two-way transformers with latent variables. Journal of Chemical Information and Modeling 61(1):123–133
  • Landrum et al [2013] Landrum G, et al (2013) Rdkit: A software suite for cheminformatics, computational chemistry, and predictive modeling. Greg Landrum 8:31
  • Lee et al [2019] Lee J, Lee Y, Kim J, et al (2019) Set transformer: A framework for attention-based permutation-invariant neural networks. In: Chaudhuri K, Salakhutdinov R (eds) Proceedings of the 36th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol 97. PMLR, pp 3744–3753, URL https://proceedings.mlr.press/v97/lee19d.html
  • Lin et al [2020] Lin K, Xu Y, Pei J, et al (2020) Automatic retrosynthetic route planning using template-free models. Chemical science 11(12):3355–3364
  • Liu et al [2017] Liu B, Ramsundar B, Kawthekar P, et al (2017) Retrosynthetic reaction prediction using neural sequence-to-sequence models. ACS central science 3(10):1103–1113
  • Rong et al [2020] Rong Y, Bian Y, Xu T, et al (2020) Self-supervised graph transformer on large-scale molecular data. In: Larochelle H, Ranzato M, Hadsell R, et al (eds) Advances in Neural Information Processing Systems, vol 33. Curran Associates, Inc., pp 12559–12571, URL https://proceedings.neurips.cc/paper_files/paper/2020/file/94aef38441efa3380a3bed3faf1f9d5d-Paper.pdf
  • Sacha et al [2021] Sacha M, Błaz M, Byrski P, et al (2021) Molecule edit graph attention network: modeling chemical reactions as sequences of graph edits. Journal of Chemical Information and Modeling 61(7):3273–3284
  • Saunders et al [2010] Saunders JO, Salituro FG, Yan S (2010) Preparation of aroylpiperazines and related compounds as pyruvate kinase m2 modulators useful in treatment of cancer. US2010331307A1, 30 Dec 2010
  • Schwaller et al [2019] Schwaller P, Laino T, Gaudin T, et al (2019) Molecular transformer: a model for uncertainty-calibrated chemical reaction prediction. ACS central science 5(9):1572–1583
  • Segler and Waller [2017] Segler MH, Waller MP (2017) Neural-symbolic machine learning for retrosynthesis and reaction prediction. Chemistry–A European Journal 23(25):5966–5971
  • Seo et al [2021] Seo SW, Song YY, Yang JY, et al (2021) GTA: Graph truncated attention for retrosynthesis. Proceedings of the AAAI Conference on Artificial Intelligence 35(1):531–539. 10.1609/aaai.v35i1.16131, URL https://ojs.aaai.org/index.php/AAAI/article/view/16131
  • Shi et al [2020] Shi C, Xu M, Guo H, et al (2020) A graph to graphs framework for retrosynthesis prediction. In: International conference on machine learning, PMLR, pp 8818–8827
  • Somnath et al [2021] Somnath VR, Bunne C, Coley C, et al (2021) Learning graph models for retrosynthesis prediction. Advances in Neural Information Processing Systems 34:9405–9415
  • Tetko et al [2020] Tetko IV, Karpov P, Van Deursen R, et al (2020) State-of-the-art augmented NLP transformer models for direct and single-step retrosynthesis. Nature Communications 11(1):5575
  • Tu and Coley [2022] Tu Z, Coley CW (2022) Permutation invariant graph-to-sequence model for template-free retrosynthesis and reaction prediction. Journal of chemical information and modeling 62(15):3503–3513
  • Ucak et al [2022] Ucak UV, Ashyrmamatov I, Ko J, et al (2022) Retrosynthetic reaction pathway prediction through neural machine translation of atomic environments. Nature Communications 13(1):1186
  • Vaswani et al [2017] Vaswani A, Shazeer N, Parmar N, et al (2017) Attention is all you need. Advances in neural information processing systems 30
  • Veličković et al [2018] Veličković P, Cucurull G, Casanova A, et al (2018) Graph attention networks. In: International Conference on Learning Representations
  • Vijayakumar et al [2016] Vijayakumar AK, Cogswell M, Selvaraju RR, et al (2016) Diverse beam search: Decoding diverse solutions from neural sequence models. arXiv preprint arXiv:161002424
  • Wan et al [2022] Wan Y, Hsieh CY, Liao B, et al (2022) Retroformer: Pushing the limits of end-to-end retrosynthesis transformer. In: International Conference on Machine Learning, PMLR, pp 22475–22490
  • Wang et al [2021] Wang X, Li Y, Qiu J, et al (2021) RetroPrime: A diverse, plausible and transformer-based method for single-step retrosynthesis predictions. Chemical Engineering Journal 420. 10.1016/j.cej.2021.129845
  • Wu et al [2022] Wu Q, Zhao W, Li Z, et al (2022) NodeFormer: A scalable graph structure learning transformer for node classification. Advances in Neural Information Processing Systems 35:27387–27401
  • Xie et al [2023] Xie S, Yan R, Guo J, et al (2023) Retrosynthesis prediction with local template retrieval. Proceedings of the AAAI Conference on Artificial Intelligence 37(4):5330–5338. 10.1609/aaai.v37i4.25664, URL https://ojs.aaai.org/index.php/AAAI/article/view/25664
  • Yan et al [2020] Yan C, Ding Q, Zhao P, et al (2020) RetroXpert: Decompose retrosynthesis prediction like a chemist. Advances in Neural Information Processing Systems 33:11248–11258
  • Yang et al [2023] Yang N, Zeng K, Wu Q, et al (2023) MoleRec: Combinatorial drug recommendation with substructure-aware molecular representation learning. In: Proceedings of the ACM Web Conference 2023, pp 4075–4085
  • Yao et al [2024] Yao L, Guo W, Wang Z, et al (2024) Node-aligned graph-to-graph: Elevating template-free deep learning approaches in single-step retrosynthesis. JACS Au
  • Zhang et al [2023] Zhang JY, Wang YT, Sun L, et al (2023) Synthesis and clinical application of new drugs approved by FDA in 2022. Molecular Biomedicine 4(1):26
  • Zheng et al [2019] Zheng S, Rao J, Zhang Z, et al (2019) Predicting retrosynthetic reactions using self-corrected transformer neural networks. Journal of Chemical Information and Modeling 60(1):47–55
  • Zhong et al [2022] Zhong Z, Song J, Feng Z, et al (2022) Root-aligned SMILES: a tight representation for chemical reaction prediction. Chemical Science 13(31):9023–9034