How to set the inputs of Tensorflow Lite C++

This article describes how to set the inputs of a Tensorflow Lite C++ interpreter; it should be a useful reference for anyone who runs into the same problem.

Problem description


I'm trying to test a simple TensorFlow Lite C++ program with a TensorflowLite model. It takes two floats and computes XOR. However, when I change the inputs, the output doesn't change. I suspect the line interpreter->typed_tensor<float>(0)[0] = x is wrong, so the inputs aren't being applied properly. How should I change the code to make it work?

Here is my code:

#include <stdio.h>
#include <stdlib.h>
#include <string>
#include <vector>
#include "tensorflow/contrib/lite/kernels/register.h"
#include "tensorflow/contrib/lite/model.h"
#include "tensorflow/contrib/lite/string_util.h"
#include "tensorflow/contrib/lite/tools/mutable_op_resolver.h"

int main(){
        const char graph_path[14] = "xorGate.lite";
        const int num_threads = 1;
        std::string input_layer_type = "float";
        std::vector<int> sizes = {2};
        float x,y;

        std::unique_ptr<tflite::FlatBufferModel> model(
                tflite::FlatBufferModel::BuildFromFile(graph_path));

        if(!model){
                printf("Failed to mmap model\n");
                exit(0);
        }

        tflite::ops::builtin::BuiltinOpResolver resolver;
        std::unique_ptr<tflite::Interpreter> interpreter;
        tflite::InterpreterBuilder(*model, resolver)(&interpreter);

        if(!interpreter){
                printf("Failed to construct interpreter\n");
                exit(0);
        }
        interpreter->UseNNAPI(false);

        if(num_threads != 1){
                interpreter->SetNumThreads(num_threads);
        }

        int input = interpreter->inputs()[0];
        interpreter->ResizeInputTensor(0, sizes);

        if(interpreter->AllocateTensors() != kTfLiteOk){
                printf("Failed to allocate tensors\n");
                exit(0);
        }

        //read two numbers

        std::printf("Type two float numbers : ");
        std::scanf("%f %f", &x, &y);
        interpreter->typed_tensor<float>(0)[0] = x;
        interpreter->typed_tensor<float>(0)[1] = y;

        printf("hello\n");
        fflush(stdout);
        if(interpreter->Invoke() != kTfLiteOk){
                std::printf("Failed to invoke!\n");
                exit(0);
        }
        float* output;
        output = interpreter->typed_output_tensor<float>(0);
        printf("output = %f\n", output[0]);
        return 0;
}

This is the message that comes out when I run the code:

root@localhost:/home# ./a.out 
nnapi error: unable to open library libneuralnetworks.so
Type two float numbers : 1 1
hello
output = 0.112958
root@localhost:/home# ./a.out 
nnapi error: unable to open library libneuralnetworks.so
Type two float numbers : 0 1
hello
output = 0.112958

Recommended answer

Solved by changing

interpreter->typed_tensor<float>(0)[0] = x;

to

interpreter->typed_input_tensor<float>(0)[0] = x;
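
The difference is that typed_input_tensor<float>(0) addresses the interpreter's first input tensor (the first entry of interpreter->inputs()), while typed_tensor<float>(0) addresses the tensor with raw index 0 in the graph, which is not necessarily an input. As a minimal sketch, assuming the same interpreter setup as in the code above, the input and output handling would then look like this:

        // Write both input values into the buffer of the first input tensor.
        float* input_data = interpreter->typed_input_tensor<float>(0);
        input_data[0] = x;
        input_data[1] = y;

        // Run inference and read back the first output tensor.
        if(interpreter->Invoke() != kTfLiteOk){
                std::printf("Failed to invoke!\n");
                exit(0);
        }
        float* output_data = interpreter->typed_output_tensor<float>(0);
        printf("output = %f\n", output_data[0]);

Equivalently, since the code above already stores the input tensor index in input (taken from interpreter->inputs()[0]), interpreter->typed_tensor<float>(input) points at the same buffer; the ResizeInputTensor call should then also be passed input rather than the hard-coded 0.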
