3 Research products, page 1 of 1

Filters applied:
  • Research software
  • 2022
  • Open Source
  • Software
  • Digital Humanities and Cultural Heritage

Sorted by: Date (most recent)
  • Research software · 2022 · Open Source · Shell
    Authors: Trieu, Hai-Long
    Publisher: bio.tools

    BioVAE is a pre-trained latent variable language model for biomedical text mining. Large-scale pre-trained language models (PLMs) have advanced state-of-the-art (SOTA) performance on various biomedical text mining tasks, and their power can be combined with the advantages of deep generative models. However, existing combined models are trained only on general-domain text, and biomedical counterparts are still missing. In this work, we describe BioVAE, the first large-scale pre-trained latent variable language model for the biomedical domain, which uses the OPTIMUS framework to train on large volumes of biomedical text. The model achieves SOTA performance on several biomedical text mining tasks when compared to existing publicly available biomedical PLMs.
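The description above refers to a latent variable language model, i.e. a VAE-style architecture in which an encoder produces a Gaussian posterior over a latent code and a KL term pulls it toward a standard-normal prior. As a minimal illustration of those two building blocks (this is hypothetical sketch code, not BioVAE's actual implementation):

```python
import numpy as np

# Core pieces of a latent-variable (VAE-style) language model, sketched
# in numpy: sampling z from the posterior q(z|x) via the
# reparameterization trick, and the KL penalty toward the prior N(0, I).

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, diag(exp(log_var))) as mu + sigma * eps."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu = np.zeros(4)
log_var = np.zeros(4)                       # unit variance
z = reparameterize(mu, log_var, rng)
print(z.shape)                              # (4,)
print(kl_to_standard_normal(mu, log_var))   # 0.0: posterior equals prior
```

In training, the reconstruction loss of the decoder is added to this KL term; the reparameterization trick is what makes the sampling step differentiable.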

  • Research software · 2022 · Open Source
    Authors: Zhu, Xiaolei
    Publisher: bio.tools

    BERT-Kcr is a tool for predicting lysine crotonylation (Kcr) sites via transfer learning with pre-trained BERT models.
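The transfer-learning pattern the entry describes, a frozen pre-trained encoder supplying features to a small trainable classifier head, can be sketched as follows. The encoder, data, and labels here are stand-ins (simple residue-count features instead of real BERT embeddings), not BERT-Kcr's actual pipeline:

```python
import numpy as np

def pretrained_encoder(seqs):
    """Stand-in for a frozen pre-trained encoder: maps each sequence
    window to a fixed feature vector (residue frequencies here)."""
    alphabet = "ACDEFGHIKLMNPQRSTVWY"
    return np.array([[s.count(a) / len(s) for a in alphabet] for s in seqs])

def train_head(X, y, lr=0.5, steps=500):
    """Logistic-regression head trained by gradient descent; the
    encoder's parameters are never updated (i.e. it stays frozen)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - y
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy windows: label 1 iff the window is lysine-rich (illustrative only).
seqs = ["KKKAKKLK", "AAAGAVLA", "KKKKAKAK", "GGAVLAAG"]
y = np.array([1, 0, 1, 0])
X = pretrained_encoder(seqs)
w, b = train_head(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(pred)  # recovers the labels on this separable toy set
```

Only `train_head` learns anything; swapping the toy encoder for real per-window BERT embeddings leaves the head-training step unchanged, which is the point of the transfer-learning setup.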

  • Research software · 2022 · Open Source · Python
    Authors: Elnaggar, Ahmed; Heinzinger, Michael; Dallago, Christian; Wang, Yu
    Publisher: bio.tools

    ProtTrans provides state-of-the-art pre-trained models for proteins. ProtTrans was trained on thousands of GPUs from Summit and hundreds of Google TPUs using various Transformer models.
