Implementation of the 'Gotta be SAFE: a new framework for molecular design' paper
:safety_vest: SAFE
Sequential Attachment-based Fragment Embedding (SAFE) is a novel molecular line notation that represents molecules as an unordered sequence of fragment blocks to improve molecule design using generative models.
Paper | Docs | 🤗 Model | 🤗 Training Dataset
Overview of SAFE
SAFE is a deep-learning-friendly molecular representation. It is an encoding that leverages a peculiarity of the SMILES decoding scheme to represent molecules as a contiguous sequence of connected fragments. SAFE strings are valid SMILES strings, and thus preserve the same amount of information. This intuitive representation of molecules as an ordered sequence of connected fragments greatly simplifies the following tasks often encountered in molecular design:
- de novo design
- superstructure generation
- scaffold decoration
- motif extension
- linker generation
- scaffold morphing.
The construction of a SAFE string requires defining a molecular fragmentation algorithm. By default, we use BRICS, but any other fragmentation algorithm can be used. The image below illustrates the process of building a SAFE string. The resulting string is a valid SMILES that can be read by datamol or RDKit.
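As a quick illustration, the sketch below encodes a molecule with the default (BRICS-based) fragmentation. The `slicer` keyword used to select the fragmentation scheme is an assumption based on the API reference, so verify the accepted values in the docs before relying on it:

```python
# Sketch: building SAFE strings with the default BRICS-based fragmentation.
# The `slicer` keyword is an assumption taken from the API reference.
import safe

celecoxib = "Cc1ccc(-c2cc(C(F)(F)F)nn2-c2ccc(S(N)(=O)=O)cc2)cc1"

default_safe = safe.encode(celecoxib)                # default fragmentation (BRICS)
brics_safe = safe.encode(celecoxib, slicer="brics")  # explicit BRICS slicing
print(default_safe)
```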
News 🚀
💥 2024/01/15 💥
- @IanAWatson has a C++ implementation of SAFE in LillyMol that is quite fast and uses a custom fragmentation algorithm. Follow the installation instructions in the repo and check out the CLI docs here: docs/Molecule_Tools/SAFE.md
Installation
You can install `safe` using pip:

```bash
pip install safe-mol
```
You can use conda/mamba:

```bash
mamba install -c conda-forge safe-mol
```
Datasets and Models
| Type | Name | Infos | Size | Comment |
|---|---|---|---|---|
| Model | datamol-io/safe-gpt | 87M params | 350M | Default model |
| Training Dataset | datamol-io/safe-gpt | 1.1B rows | 250GB | Training dataset |
| Drug Benchmark Dataset | datamol-io/safe-drugs | 26 rows | 20 kB | Benchmarking dataset |
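Both the model and the datasets are hosted on the Hugging Face Hub. A minimal sketch for inspecting the training data with the `datasets` library (assuming the dataset exposes a standard `train` split; streaming avoids downloading the full ~250GB):

```python
# Sketch: streaming a few rows of the SAFE training set from the Hugging Face Hub.
# Assumes the `datasets` library is installed and that a "train" split exists.
from datasets import load_dataset

dataset = load_dataset("datamol-io/safe-gpt", split="train", streaming=True)
for i, row in enumerate(dataset):
    print(row)  # inspect the row structure and available columns
    if i >= 2:
        break
```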
Usage
Please refer to the documentation, which contains tutorials for getting started with `safe` and detailed descriptions of the functions provided, as well as an example of how to get started with SAFE-GPT.
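For orientation, the sketch below shows how generation with SAFE-GPT might look. The `SAFEDesign` helper and its `load_default`, `de_novo_generation`, and `scaffold_decoration` methods are assumptions based on the tutorials; check the documentation for the exact API before relying on them:

```python
# Sketch only: generating molecules with the pretrained SAFE-GPT model.
# `SAFEDesign` and the method names/arguments below are assumptions taken
# from the tutorials; verify them against the docs.
import safe as sf

designer = sf.SAFEDesign.load_default(verbose=True)  # loads datamol-io/safe-gpt

# De novo design: sample new molecules (returned as SMILES strings)
generated = designer.de_novo_generation(n_samples_per_trial=12)

# Scaffold decoration: grow substituents at the attachment points (*) of a scaffold
scaffold = "[*]N1CCN(CC1)CC1CC1[*]"
decorated = designer.scaffold_decoration(scaffold=scaffold, n_samples_per_trial=12)
```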
API
We summarize some key functions provided by the `safe` package below.
| Function | Description |
|---|---|
| `safe.encode` | Translates a SMILES string into its corresponding SAFE string. |
| `safe.decode` | Translates a SAFE string into its corresponding SMILES string. The SAFE decoder simply augments RDKit's `Chem.MolFromSmiles` with an optional correction argument to take care of missing hydrogen bonds. |
| `safe.split` | Tokenizes a SAFE string, e.g. to build a generative model. |
Examples
Translation between SAFE and SMILES representations
```python
import safe

ibuprofen = "CC(Cc1ccc(cc1)C(C(=O)O)C)C"

# SMILES -> SAFE -> SMILES translation
try:
    ibuprofen_sf = safe.encode(ibuprofen)  # c12ccc3cc1.C3(C)C(=O)O.CC(C)C2
    ibuprofen_smi = safe.decode(ibuprofen_sf, canonical=True)  # CC(C)Cc1ccc(C(C)C(=O)O)cc1
except safe.EncoderError:
    pass
except safe.DecoderError:
    pass

ibuprofen_tokens = list(safe.split(ibuprofen_sf))
```
Training/Finetuning a (new) model
A command line interface is available to train a new model; please run `safe-train --help`. You can also provide an existing checkpoint to continue training or finetune on your own dataset.
For example:
```bash
safe-train --config <path to config> \
    --model-path <path to model> \
    --tokenizer <path to tokenizer> \
    --dataset <path to dataset> \
    --num_labels 9 \
    --torch_compile True \
    --optim "adamw_torch" \
    --learning_rate 1e-5 \
    --prop_loss_coeff 1e-3 \
    --gradient_accumulation_steps 1 \
    --output_dir "<path to outputdir>" \
    --max_steps 5
```
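Training expects your molecules as SAFE strings, so a typical first step is converting an existing SMILES dataset. The sketch below is illustrative only: the CSV file and the `smiles`/`safe` column names are hypothetical, and only `safe.encode`/`safe.EncoderError` come from the package:

```python
# Sketch: converting a hypothetical CSV of SMILES into SAFE strings before
# finetuning. File name and column names are illustrative; adapt them to
# your own dataset.
import pandas as pd
import safe

df = pd.read_csv("my_molecules.csv")  # expects a "smiles" column

safe_strings = []
for smi in df["smiles"]:
    try:
        safe_strings.append(safe.encode(smi))
    except safe.EncoderError:
        safe_strings.append(None)  # drop molecules that cannot be fragmented

df["safe"] = safe_strings
df.dropna(subset=["safe"]).to_csv("my_molecules_safe.csv", index=False)
```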
References
If you use this repository, please cite the following related paper:
```bibtex
@misc{noutahi2023gotta,
      title={Gotta be SAFE: A New Framework for Molecular Design},
      author={Emmanuel Noutahi and Cristian Gabellini and Michael Craig and Jonathan S. C Lim and Prudencio Tossou},
      year={2023},
      eprint={2310.10773},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
License
Note that all data and model weights of SAFE are exclusively licensed for research purposes. The accompanying dataset is licensed under CC BY 4.0, which permits solely non-commercial usage. See DATA_LICENSE for details.
This code base is licensed under the Apache-2.0 license. See LICENSE for details.
Development lifecycle
Setup dev environment
```bash
mamba create -n safe -f env.yml
mamba activate safe
pip install --no-deps -e .
```
Tests
You can run tests locally with:

```bash
pytest
```