How to create a tensor without knowing shape upfront in Keras/TF?











I am currently stuck on a problem. I need to create a temporary variable (a tensor) to hold values while my custom layer processes its input. The problem is that the layer receives its input in batches, and the batch size is variable. One other dimension of the input is also variable and is only known when the model runs.



So, how can I create a tensor whose dimensions vary with each new input? Surely there is a way to do this, because built-in layers also accept batches of variable size, and the shape of their output tensor varies as they emit the whole processed batch. But I don't know how to do it.



Here's the piece of my code:



similarity_matrix = K.zeros(shape=(int_shape(context_vectors)[0], int_shape(context_vectors)[1], int_shape(query_vectors)[1]), dtype='float32')
for i, context_vector in enumerate(modified_context_vectors):
    for j, query_vector in enumerate(modified_query_vectors):
        concatenated_vector = concatenate_and_multiply(context_vector, query_vector)
        result = dot(concatenated_vector, self.kernel)
        similarity_matrix[:, i, j] = result
return similarity_matrix


Here, context_vectors and query_vectors both have shape (None, None, 600). The first dimension is the batch size, which is the same for both. The second dimension is the number of words in the context or query, and so differs between the two.
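For reference, the usual way around the None dimensions is to build the zeros tensor from the runtime shape rather than the static one. A minimal sketch under that assumption (the concrete tensors below are stand-ins for the question's inputs; the key point is that tf.shape() resolves dimensions at run time, whereas K.int_shape() returns the static shape, including None entries):

```python
import tensorflow as tf

# Stand-ins for context_vectors / query_vectors; in a real model the first
# two dimensions would be unknown (None) at graph-construction time.
context_vectors = tf.zeros((2, 5, 600))  # (batch, context_words, 600)
query_vectors = tf.zeros((2, 7, 600))    # (batch, query_words, 600)

# tf.shape() returns the shape as a tensor evaluated at run time, so it
# works even when the static shape contains None.
dynamic_shape = tf.stack([tf.shape(context_vectors)[0],
                          tf.shape(context_vectors)[1],
                          tf.shape(query_vectors)[1]])
similarity_matrix = tf.zeros(dynamic_shape, dtype=tf.float32)
```

Here similarity_matrix comes out with shape (batch, context_words, query_words) without any dimension needing to be known up front.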










  • You can only have one variable size: the batch size. The rest has to be fixed. The other dimensions indicate the number of connections to other nodes; if they are unknown, how can you train them properly? Answer: you can't.
    – Matthieu Brucher
    Nov 22 at 14:54












  • You are wrong, brother. We can have variable input size to a model, and I have already built one such model. Please reconsider the question and don't downvote unnecessarily; rather, follow it up if you don't have an answer.
    – Kadam Parikh
    Nov 22 at 14:59










  • BTW... the question does not depend on my code snippet. I want to know how to create a tensor of variable shape (take, for example, shape (batch_size, 600)). Can you answer this? Don't downvote. Because of you and other such people, we don't get answers here.
    – Kadam Parikh
    Nov 22 at 15:01










  • I'm not your brother, and you cannot have a variable size for the model; the kernel sizes are fixed. If you have a double batch_size, that's different: you are still feeding the same stuff into a fixed model.
    – Matthieu Brucher
    Nov 22 at 16:08












  • Use something like batch_size = Y.get_shape()[0], where Y is a tensor whose batch size is None. Then the dynamic shape seems to be properly set up in the tensors that use this for their sizes (at least this worked for my GANs).
    – Matthieu Brucher
    Nov 22 at 16:15
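The suggestion in the last comment can be sketched as follows. One caveat worth hedging: Y.get_shape() returns the *static* shape, whose batch entry may literally be None, while tf.shape(Y) returns the *dynamic* shape as a tensor, so the latter is the safer seed for a new buffer (Y here is a hypothetical stand-in input):

```python
import tensorflow as tf

# Stand-in for a tensor whose batch size would be unknown statically.
Y = tf.zeros((4, 600))

static_batch = Y.get_shape()[0]   # static shape entry; may be None in a real model
dynamic_batch = tf.shape(Y)[0]    # dynamic shape entry; always resolves at run time

# Seed a temporary buffer of shape (batch_size, 600) from the dynamic shape.
buffer = tf.zeros(tf.stack([dynamic_batch, 600]), dtype=tf.float32)
```

With this, the buffer's leading dimension tracks whatever batch size actually arrives, which is exactly the (batch_size, 600) case raised in the comments above.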















numpy tensorflow keras neural-network tensor

asked Nov 22 at 14:51 by Kadam Parikh