[Paper reading TODO] Transfer learning for drug–target interaction prediction

Bioinformatics, 2023. Transfer learning for drug–target interaction prediction. These notes focus on the three modes of transfer learning used in the paper.

Deep transfer learning applies transfer learning to deep neural networks. Its training phase is composed of two stages.

Stage I: A source model is obtained by training the network with a sufficient amount of source training data. This is also referred to as the pre-trained source model.
Stage II: The pre-trained source model is used as an initial configuration and re-trained on the target training data (which are typically scarce) to obtain a target model.
Techniques for Stage II are grouped under three modes. Note that the architecture of a deep neural network can be functionally decomposed into roughly two parts: the bottom layer(s) where feature extraction is performed and the upper layer(s) where prediction is performed. Mode 2 and Mode 3 make use of this functional decomposition of the network.
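
To make the two-stage setup and this decomposition concrete, here is a minimal PyTorch sketch. It is not the paper's actual architecture: the class name DTINet, the layer sizes, and the random stand-in source data are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DTINet(nn.Module):
    """Toy network with the two-part decomposition described above."""
    def __init__(self, in_dim=1024, hidden_dim=256, n_classes=2):
        super().__init__()
        # Bottom layers: feature extraction.
        self.features = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Upper layer: the predictor (output layer).
        self.predictor = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):
        return self.predictor(self.features(x))

# Stage I: pre-train on abundant source data (abbreviated here with random
# stand-in tensors), then save the converged weights as the source model.
source_model = DTINet()
optimizer = torch.optim.Adam(source_model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x_src, y_src = torch.randn(256, 1024), torch.randint(0, 2, (256,))
for _ in range(10):
    optimizer.zero_grad()
    loss_fn(source_model(x_src), y_src).backward()
    optimizer.step()
torch.save(source_model.state_dict(), "source_model.pt")
```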

Mode 1—Full fine-tuning: The most common deep transfer learning technique is fine-tuning, which is in fact parameter-based transfer learning. Based on the assumption that the learned parameter values (weights) contain useful knowledge from the source domain, we seek better performance by carrying these weights over to the target model: the parameter values acquired from the source model form the initial values of the parameters of the target model. In this way, the weights of the target model start not from random values but from the converged weights of the pre-trained source model; the target model is then re-trained on the small target training set and converges faster, in fewer training epochs (Fig. 3a).
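
Continuing the DTINet sketch above, Mode 1 might look as follows; the learning rate, epoch count, and stand-in target data are assumptions.

```python
import torch
import torch.nn as nn

# Stand-in target training set: small, as in Stage II.
x_tgt, y_tgt = torch.randn(32, 1024), torch.randint(0, 2, (32,))

# Initialize every parameter of the target model from the source weights.
target_model = DTINet()
target_model.load_state_dict(torch.load("source_model.pt"))

# All parameters remain trainable: the whole network is fine-tuned.
optimizer = torch.optim.Adam(target_model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):
    optimizer.zero_grad()
    loss_fn(target_model(x_tgt), y_tgt).backward()
    optimizer.step()
```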

Mode 2—Feature transformer: The source model is used to form a latent feature space that is common to both the source and target data; this is feature-based transfer learning. The feature transformer is obtained by freezing the bottom layers (which perform feature extraction) of the pre-trained source model during Stage II; i.e. the weights of the bottom layers are not updated during retraining with the target training data. Only the weights of the output layer (i.e. the predictor) are modified with the limited target training data (Fig. 3b).
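
A minimal sketch of Mode 2, again continuing the DTINet example: the features block is frozen and only the predictor's parameters are passed to the optimizer. Hyperparameters and data are stand-ins.

```python
import torch
import torch.nn as nn

x_tgt, y_tgt = torch.randn(32, 1024), torch.randint(0, 2, (32,))

target_model = DTINet()
target_model.load_state_dict(torch.load("source_model.pt"))

# Freeze the bottom (feature-extraction) layers: their weights are not
# updated during retraining with the target data.
for p in target_model.features.parameters():
    p.requires_grad = False

# Only the output layer (predictor) is optimized.
optimizer = torch.optim.Adam(target_model.predictor.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):
    optimizer.zero_grad()
    loss_fn(target_model(x_tgt), y_tgt).backward()
    optimizer.step()
```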

Mode 3—Shallow classifier: In Stage II, the output layer (predictor) of the source model is replaced with a shallow classifier. The feature vectors for the target data are extracted by the frozen bottom layers of the source model, and only the shallow classifier is trained on them. Mode 3 is thus similar to Mode 2, except that the extracted feature vectors are fed to a shallow classifier instead of the output layer (predictor) of the neural network model (Fig. 3c).
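
A sketch of Mode 3, assuming an SVM as the shallow classifier (the paper's choice of shallow classifier may differ); the frozen features block of the source model produces the feature vectors, and the target data are stand-ins.

```python
import torch
from sklearn.svm import SVC

x_tgt, y_tgt = torch.randn(100, 1024), torch.randint(0, 2, (100,))

# Load the pre-trained source model and use its bottom layers, frozen,
# as a fixed feature extractor.
source_model = DTINet()
source_model.load_state_dict(torch.load("source_model.pt"))
source_model.eval()

# Extract feature vectors for the target data (no gradients needed).
with torch.no_grad():
    feats = source_model.features(x_tgt).numpy()

# Replace the neural predictor with a shallow classifier trained only on
# the extracted feature vectors.
clf = SVC(kernel="rbf")
clf.fit(feats, y_tgt.numpy())
```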
