
DNA Sequences Embedding (dnabert) · 16 comments · closed

jerryji1993 commented on May 29, 2024
DNA Sequences Embedding


Comments (16)

Zhihan1996 commented on May 29, 2024

Hi,

I am sorry that I was wrong in the last response. The 2-dimensional logits here are essentially the classification result for the given sequence, where each dimension stands for the probability that the sequence belongs to that class. The embedding for each sequence should be a 768-dimensional vector. You can obtain it with:

import torch
from transformers import BertModel, BertConfig, DNATokenizer

dir_to_pretrained_model = "xxx/xxx"   # path to the pre-trained DNABERT checkpoint

config = BertConfig.from_pretrained('https://raw.githubusercontent.com/jerryji1993/DNABERT/master/src/transformers/dnabert-config/bert-config-6/config.json')
tokenizer = DNATokenizer.from_pretrained('dna6')
model = BertModel.from_pretrained(dir_to_pretrained_model, config=config)

sequence = "AATCTA ATCTAG TCTAGC CTAGCA"   # space-separated 6-mers
model_input = tokenizer.encode_plus(sequence, add_special_tokens=True, max_length=512)["input_ids"]
model_input = torch.tensor(model_input, dtype=torch.long)
model_input = model_input.unsqueeze(0)   # to generate a fake batch with batch size one

output = model(model_input)

Here, output[1] is the embedding of the input sequence.

For the current version, if you have sequences longer than 512, you need to either truncate them to 512 or split them into multiple pieces of length 512 and concatenate their embeddings.
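
For example, a minimal sketch of that splitting, reusing the tokenizer and model from the snippet above (the 510-k-mer chunk size, which leaves room for the [CLS] and [SEP] tokens, and the concatenation at the end are illustrative choices, not part of the original code):

import torch

def embed_long_sequence(kmer_sequence, tokenizer, model, chunk_size=510):
    # split the space-separated k-mer string into chunks that fit within 512 tokens
    kmers = kmer_sequence.split()
    chunks = [" ".join(kmers[i:i + chunk_size]) for i in range(0, len(kmers), chunk_size)]

    pieces = []
    with torch.no_grad():
        for chunk in chunks:
            ids = tokenizer.encode_plus(chunk, add_special_tokens=True, max_length=512)["input_ids"]
            ids = torch.tensor(ids, dtype=torch.long).unsqueeze(0)  # fake batch of size one
            output = model(ids)
            pieces.append(output[1])  # 1 x 768 embedding for this chunk

    # concatenate the per-chunk embeddings into one vector for the whole sequence
    return torch.cat(pieces, dim=-1)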


iamysk commented on May 29, 2024

Hi,

How do I get embeddings of multiple sequences at once? I tried with a list of sequences, but the output is always a 1x768 vector.

Thanks.


aliakay commented on May 29, 2024

Hello, I just want to import the DNATokenizer from transformers, but I get "ImportError: cannot import name 'DNATokenizer'". Can you help me solve this? Google doesn't tell me anything about it...

You should clone the repository into the right directory, because DNATokenizer is inside the DNABERT folder.

Try this,

!git clone https://github.com/jerryji1993/DNABERT
%cd DNABERT
!python3 -m pip install --editable .
%cd examples
!python3 -m pip install -r requirements.txt

and run the import from that directory,
cd "DNABERT/examples"
then it should import the DNATokenizer.
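
If the editable install worked, a quick check like this (reusing the 'dna6' vocabulary from the snippet earlier in the thread; just a sketch, not from the original reply) should run without the ImportError:

# run from DNABERT/examples after the editable install
from transformers import DNATokenizer  # should resolve to the DNABERT fork of transformers

tokenizer = DNATokenizer.from_pretrained('dna6')
ids = tokenizer.encode_plus("AATCTA ATCTAG TCTAGC CTAGCA",
                            add_special_tokens=True, max_length=512)["input_ids"]
print(ids)  # a short list of token IDs if the import and vocabulary loaded correctly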


ChengkuiZhao commented on May 29, 2024

(quoting Zhihan1996's embedding example above)

Dear Zhihan,

I would like to use DNABERT to get representations of short DNA sequences that preserve the sequential relationships, instead of using one-hot encoding, and feed these values into another model. Is it good to use the embedding representation (output[1]) directly, or would the attention scores also work as a representation?

I did some research on the output of this model: output[0] is the last_hidden_state (https://huggingface.co/docs/transformers/main_classes/output). I saw people use output[0][:, 0, :], which is the 768-dimensional vector for the 'CLS' token in the last hidden layer, as input to the following model, and that works for me. I think the attention scores are not meant to be used as the output representation.
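
For reference, a rough sketch of the two ways of pulling a sequence-level vector out of the outputs, reusing model and model_input from the quoted snippet (the shapes in the comments assume a single-sequence batch):

import torch

with torch.no_grad():
    output = model(model_input)

last_hidden_state = output[0]                # shape (1, sequence_length, 768)
pooled_output = output[1]                    # shape (1, 768), the pooled [CLS] representation
cls_embedding = last_hidden_state[:, 0, :]   # the raw 768-dim [CLS] vector from the last layer

print(pooled_output.shape, cls_embedding.shape)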


Zhihan1996 commented on May 29, 2024

Hi,

Yes, you can do this. Please refer to line 420 of https://github.com/jerryji1993/DNABERT/blob/master/examples/run_finetune.py. The variable logits stands for the embedding of the DNA sequence; you can use it directly.


elbasir commented on May 29, 2024

Hi,
Thanks for your answer. I have checked the embedding using the variable logits and found that it is only a 2-dimensional vector for each DNA sequence. I have sequences longer than 500 bp; would it be possible to extend the embedding size, or do you think the current embedding size is enough to represent a long DNA sequence?


elbasir commented on May 29, 2024

Thanks a lot!


maiskovich commented on May 29, 2024

(quoting Zhihan1996's embedding example above)

I tried this code, and when running it inside a for loop the process was always getting killed because it used too much memory. I needed to put the prediction part inside with torch.no_grad():. It ended up looking like this:

import torch
from transformers import BertModel, BertConfig, DNATokenizer

dir_to_pretrained_model = "xxx/xxx"

config = BertConfig.from_pretrained('https://raw.githubusercontent.com/jerryji1993/DNABERT/master/src/transformers/dnabert-config/bert-config-6/config.json')
tokenizer = DNATokenizer.from_pretrained('dna6')
model = BertModel.from_pretrained(dir_to_pretrained_model, config=config)

sequence = "AATCTA ATCTAG TCTAGC CTAGCA"
with torch.no_grad():   # no gradients are tracked, which keeps memory usage down
    model_input = tokenizer.encode_plus(sequence, add_special_tokens=True, max_length=512)["input_ids"]
    model_input = torch.tensor(model_input, dtype=torch.long)
    model_input = model_input.unsqueeze(0)   # to generate a fake batch with batch size one

    output = model(model_input)
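
Inside a for loop over many sequences, the same idea might look roughly like this (a sketch only; the sequences list is illustrative, and tokenizer/model are the objects defined above):

import torch

# illustrative list; tokenizer and model are the objects defined above
sequences = ["AATCTA ATCTAG TCTAGC CTAGCA", "GGTACC GTACCA TACCAT ACCATG"]

model.eval()  # make sure dropout is off so the embeddings are deterministic
embeddings = []
with torch.no_grad():  # no computation graph is kept, so memory stays flat across iterations
    for sequence in sequences:
        ids = tokenizer.encode_plus(sequence, add_special_tokens=True, max_length=512)["input_ids"]
        ids = torch.tensor(ids, dtype=torch.long).unsqueeze(0)
        output = model(ids)
        embeddings.append(output[1].squeeze(0))  # 768-dim pooled embedding per sequence

embeddings = torch.stack(embeddings)  # shape (number_of_sequences, 768)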


asimokby commented on May 29, 2024

(quoting Zhihan1996's embedding example above)

Thank you this is very helpful!

If you run this snippet in a script that lives in the parent directory, at the same level as the examples folder, you may run into some problems.

I had to change the following two imports in modeling_albert.py:
from transformers.configuration_albert import AlbertConfig
from transformers.modeling_bert import ACT2FN, BertEmbeddings, BertSelfAttention, prune_linear_layer

to the following:

from transformers.models.albert.configuration_albert import AlbertConfig
from transformers.models.bert.modeling_bert import ACT2FN, BertEmbeddings, BertSelfAttention, prune_linear_layer

Also, I made a change to the snippet to get it to work. I changed the following import statement:

from transformers import BertModel, BertConfig, DNATokenizer

to

from src.transformers import DNATokenizer
from transformers import BertModel, BertConfig


ChengkuiZhao commented on May 29, 2024

(ChengkuiZhao's original ImportError question about DNATokenizer, answered by aliakay above)


ChengkuiZhao commented on May 29, 2024

(quoting the DNATokenizer ImportError question and aliakay's installation instructions above)

Actually I didn't use the DNATokenizer; I used the BertTokenizer instead. The code also works. Are these two tokenizers different, and does it make much of a difference?
Thank you so much for your reply.


aliakay commented on May 29, 2024

(quoting the exchange above about using BertTokenizer instead of DNATokenizer)

As far as I understand, DNATokenizer is built specifically for DNA sequences (k-mers), while BertTokenizer is a tokenizer for natural-language sentences, so the two tokenizers produce different input IDs for the same sequence, which affects your output.
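
A rough sketch of that difference, assuming the DNABERT fork is installed so both tokenizers can be imported ('bert-base-uncased' is just an example of a natural-language vocabulary, not something used in this thread):

from transformers import BertTokenizer, DNATokenizer

sequence = "AATCTA ATCTAG TCTAGC CTAGCA"

# DNATokenizer has one vocabulary entry per 6-mer, so each k-mer maps to a single token ID
dna_tokenizer = DNATokenizer.from_pretrained('dna6')
print(dna_tokenizer.encode_plus(sequence, add_special_tokens=True)["input_ids"])

# BertTokenizer uses a WordPiece vocabulary built for English text, so the same string
# is broken into unrelated subword pieces and yields very different IDs
bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(bert_tokenizer.encode_plus(sequence, add_special_tokens=True)["input_ids"])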


ChengkuiZhao commented on May 29, 2024

(quoting the exchange above, including aliakay's explanation of the difference between the two tokenizers)

OK, I will try installing the requirements your way. So far I have just written the code on my own without installing it; I hope the installation will work.


aliakay commented on May 29, 2024

(aliakay's original question, quoted and answered in ChengkuiZhao's reply above, about whether to use the output[1] embedding directly or the attention scores)


WENHUAN22 commented on May 29, 2024

Dear authors,

I have a question about obtaining embedding vectors of my data.
For example, the first record in my dataset is "AATCTA ATCTAG TCTAGC CTAGCA".

May I use
model.embeddings.word_embeddings.weight[0]
as the embedding vector of my first sample?
Is there any difference between the method you introduced above (output[1]) and this one?

And after I have the embedding vectors, I am going to build a classifier on them. It would be nice if you could tell me whether this approach is correct.


palset commented on May 29, 2024

(quoting Zhihan1996's embedding example above)

I see that the model_input returned by tokenizer.encode_plus does not have padding on the left or right, even though the input size is < 512. If I add padding on the right, the embedding generated by DNABERT changes. So what, in your view, is the correct format? Should I manually add padding on the right, or just leave the input unpadded?
Thanks for the great work!

