I am trying to use the code below to get OCR results.
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
""" An example of predicting CAPTCHA image data with a LSTM network pre-trained with a CTC loss"""
from __future__ import print_function
# ... (rest of the example file truncated)
But I find that I can only feed in a fixed number of images at a time. Is there a way to change the batch size for each inference call?
Thanks in advance.
indu | May 21, 2018, 8:09pm | #2
When a graph is created, it is created with a fixed input shape, which includes the batch size.
You can effectively use a smaller batch size by zeroing out the unused inputs and ignoring the corresponding outputs.
If you want to use a larger batch size, you have to reshape, which recreates the graph with the new input shape.
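A minimal sketch of both approaches, assuming an already-bound MXNet Module named mod with a single input called 'data'; the helper names, shapes, and the input name are illustrative assumptions, not something from this thread:

import mxnet as mx
import numpy as np

def predict_padded(mod, images, bound_batch_size):
    """Run inference on n <= bound_batch_size images by zero-padding the batch.
    `images` is a NumPy array whose first axis is the (smaller) batch axis."""
    n = images.shape[0]
    padded = np.zeros((bound_batch_size,) + images.shape[1:], dtype=images.dtype)
    padded[:n] = images
    mod.forward(mx.io.DataBatch(data=[mx.nd.array(padded)]), is_train=False)
    # Only the first n rows of the output correspond to real images.
    return mod.get_outputs()[0].asnumpy()[:n]

def grow_batch(mod, new_batch_size, per_image_shape):
    """Switch to a larger batch size by recreating the executor via Module.reshape."""
    mod.reshape(data_shapes=[('data', (new_batch_size,) + per_image_shape)])

Padding wastes some compute on the zeroed rows but keeps the graph untouched; reshape avoids that waste at the cost of rebuilding the executor whenever the batch size grows.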
Thanks for the reply. I found that BucketingModule can solve this as well. Which method do you think is faster and uses fewer resources? I want to run it on the Jetson TX1. Thank you very much.
indu | May 31, 2018, 7:58pm | #4
If you are using an RNN, you can use BucketingModule, and it is faster.
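A rough sketch of that idea, using the batch size itself as the bucket key so that each batch size gets its own cached executor. The tiny network, names, and shapes below are placeholders standing in for the real CAPTCHA LSTM+CTC model, and the whole thing is only an assumption-laden illustration:

import mxnet as mx

SEQ_LEN = 80        # assumed fixed feature length per image
DEFAULT_BATCH = 32  # default bucket key

def sym_gen(batch_size):
    # Build the symbol for a given batch size (the bucket key).
    data = mx.sym.Variable('data')
    net = mx.sym.FullyConnected(data, num_hidden=11)  # stand-in for the real model
    net = mx.sym.softmax(net)
    return net, ('data',), None

mod = mx.mod.BucketingModule(sym_gen,
                             default_bucket_key=DEFAULT_BATCH,
                             context=mx.cpu())
mod.bind(data_shapes=[('data', (DEFAULT_BATCH, SEQ_LEN))], for_training=False)
mod.init_params()  # in practice, load the pre-trained weights instead

# A DataBatch carrying a different bucket_key reuses (or lazily creates)
# an executor sized for that batch, so no zero padding is needed.
small_batch = mx.io.DataBatch(data=[mx.nd.zeros((8, SEQ_LEN))],
                              bucket_key=8,
                              provide_data=[('data', (8, SEQ_LEN))])
mod.forward(small_batch, is_train=False)
print(mod.get_outputs()[0].shape)  # (8, 11)

Executors created for different bucket keys share parameter memory with the default bucket, so switching batch sizes should be cheap after the first forward pass with a given size.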