Exploring Voice Control of a Robot Dog with the iFlytek Speech SDK (Part 3)

Updated: 2024-10-27 14:23:16


The previous posts explored the feasibility of speech recognition and a standalone script for controlling the robot dog's motion. This post takes the first step toward driving the dog's motion with speech recognition. Since audio capture on the Raspberry Pi still has some issues, recognition is tested for now against pre-recorded audio files.

Writing the publisher

The publisher node publishes the speech recognition result.

Source code

Only the main function is shown here. The point to watch is the path where the audio file is stored.

int main(int argc, char* argv[])
{
    ros::init(argc, argv, "voiceRecognition");
    ros::NodeHandle n;
    ros::Rate loop_rate(10);
    ros::Publisher voiceWordsPub = n.advertise<std_msgs::String>("voiceWords", 1000);

    int ret = MSP_SUCCESS;
    int upload_on = 1; /* whether to upload the user words */
    /* login params, please do keep the appid correct */
    const char* login_params = "appid = 5fa8bc98, work_dir = .";
    int aud_src = 0; /* from mic or file */
    int count = 0;

    while (ros::ok())
    {
        /* See "iFlytek MSC Reference Manual" */
        const char* session_begin_params =
            "sub = iat, domain = iat, language = zh_cn, "
            "accent = mandarin, sample_rate = 16000, "
            "result_type = plain, result_encoding = utf8";

        /* Log in first. The 1st arg is username, the 2nd is password;
         * just set them to NULL. The 3rd arg is the login parameters. */
        ret = MSPLogin(NULL, NULL, login_params);
        if (MSP_SUCCESS != ret)
        {
            printf("MSPLogin failed, error code %d.\n", ret);
            goto exit; // login failed, exit the program
        }

        // Whether to upload the user vocabulary
        printf("Want to upload the user words?\n0: No.\n1: Yes.\n");
        scanf("%d", &upload_on);
        if (upload_on)
        {
            printf("Uploading the user words ...\n");
            ret = upload_userwords();
            if (MSP_SUCCESS != ret)
                goto exit;
            printf("Uploaded successfully\n");
        }

        // Choose the audio source: a recorded file or live microphone input
        printf("Where does the audio come from?\n"
               "0: From an audio file.\n1: From microphone.\n");
        scanf("%d", &aud_src);
        if (aud_src != 0)
        {
            printf("Demo recognizing the speech from microphone\n");
            printf("Speak in 15 seconds\n");
            demo_mic(session_begin_params);
            printf("15 sec passed\n");
        }
        else
        {
            printf("Demo recognizing the speech from a recorded audio file\n");
            // Mind the location of the audio file; update this path when porting the code
            demo_file("/home/liuda/spotmicro/src/spotMicro/robot_voice/bin/wav/iflytek02.wav",
                      session_begin_params);
        }
        MSPLogout();

        // Publish the recognition result when speech was recognized
        std_msgs::String msg;
        if (resultFlag)
        {
            resultFlag = 0;
            msg.data = g_result;
            voiceWordsPub.publish(msg);
            printf("pub\n");
        }
        // Otherwise publish a preset fallback message
        else
        {
            std::stringstream ss;
            ss << "nothing";
            msg.data = ss.str();
            voiceWordsPub.publish(msg);
        }

        ros::spinOnce();
        loop_rate.sleep();
        count++;
    }

exit:
    MSPLogout(); // Logout...
    return 0;
}
  • Recording the audio files
    Since capturing live microphone audio for real-time recognition is still problematic, the voice control function is verified for now with pre-recorded audio files. Note that when iFlytek speech recognition takes an audio file as input, the file format must meet certain requirements; see the recording requirements for details. I recorded the files with Audacity, which worked reasonably well. In actual recognition tests, accuracy on single keywords was not very high; for example, "walk" was recognized as "work". Later on, a user vocabulary or a keyword-spotting control scheme could be used instead.

Three audio files were recorded initially, corresponding to three different working modes.

  • stand.wav
  • walk.wav
  • rest.wav
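Since the session parameters above assume 16 kHz audio (sample_rate = 16000), it is worth checking each recorded file before feeding it to demo_file. Here is a minimal sketch using Python's standard wave module; the 16-bit mono PCM defaults are an assumption based on typical iFlytek dictation input, not taken from the original post:

```python
import wave

def matches_iat_format(path, rate=16000, width_bytes=2, channels=1):
    """Return True if the WAV file is PCM audio with the expected
    sample rate, sample width, and channel count."""
    with wave.open(path, "rb") as wf:
        return (wf.getframerate() == rate
                and wf.getsampwidth() == width_bytes
                and wf.getnchannels() == channels)
```

A file that fails the check can be resampled and downmixed in Audacity before use.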

For ease of demonstration, the publisher was given different modes so that different voice messages can be published to control the robot dog.

Modified code

  • iat_publish.cpp

The main changes: the microphone input window was shortened to 5 seconds, and input paths for the different audio files were added, so three different command forms can be recognized.

int main(int argc, char* argv[])
{
    // Initialize ROS
    ros::init(argc, argv, "voiceRecognition");
    ros::NodeHandle n;
    ros::Rate loop_rate(10);

    // Declare publisher and subscriber
    // Subscribe to the wake-up signal for speech recognition
    ros::Subscriber wakeUpSub = n.subscribe("voiceWakeup", 1000, WakeUp);
    // Publish the speech recognition result
    ros::Publisher voiceWordsPub = n.advertise<std_msgs::String>("voiceWords", 1000);

    ROS_INFO("Sleeping...");
    int count = 0;
    int aud_src = 0;
    while (ros::ok())
    {
        // Wake up speech recognition
        if (wakeupFlag)
        {
            ROS_INFO("Wakeup...");
            int ret = MSP_SUCCESS;
            const char* login_params = "appid = 5fa8bc98, work_dir = .";
            const char* session_begin_params =
                "sub = iat, domain = iat, language = zh_cn, "
                "accent = mandarin, sample_rate = 16000, "
                "result_type = plain, result_encoding = utf8";

            ret = MSPLogin(NULL, NULL, login_params);
            if (MSP_SUCCESS != ret)
            {
                MSPLogout();
                printf("MSPLogin failed, error code %d.\n", ret);
            }

            printf("Where does the audio come from?\n"
                   "0: From the audio file stand.wav.\n1: From microphone.\n"
                   "2: From the audio file walk.wav.\n3: From the audio file rest.wav.\n");
            scanf("%d", &aud_src);
            if (aud_src == 1)
            {
                printf("Demo recognizing the speech from microphone\n");
                printf("Speak in 5 seconds\n");
                demo_mic(session_begin_params);
                printf("5 sec passed\n");
            }
            else if (aud_src == 0)
            {
                printf("Demo recognizing the speech from a recorded audio file\n");
                demo_file("/home/liuda/spotmicro/src/spotMicro/robot_voice/bin/wav/stand.wav", session_begin_params);
            }
            else if (aud_src == 2)
            {
                printf("Demo recognizing the speech from a recorded audio file\n");
                demo_file("/home/liuda/spotmicro/src/spotMicro/robot_voice/bin/wav/walk.wav", session_begin_params);
            }
            else if (aud_src == 3)
            {
                printf("Demo recognizing the speech from a recorded audio file\n");
                demo_file("/home/liuda/spotmicro/src/spotMicro/robot_voice/bin/wav/rest.wav", session_begin_params);
            }
            //wakeupFlag = 0;
            MSPLogout();
        }

        // Speech recognition finished: publish the result
        std_msgs::String msg;
        if (resultFlag)
        {
            resultFlag = 0;
            msg.data = g_result;
            voiceWordsPub.publish(msg);
            printf("pub\n");
        }
        else
        {
            std::stringstream ss;
            ss << "nothing";
            msg.data = ss.str();
            voiceWordsPub.publish(msg);
        }

        ros::spinOnce();
        loop_rate.sleep();
        count++;
    }

exit:
    MSPLogout(); // Logout...
    return 0;
}

Writing the subscriber

With speech recognition working, a subscriber is written to receive the recognized messages and use them to control the robot dog's motion.

  • voice_control.py
    This script subscribes to the messages published on the voiceWords topic and matches each message against the command set; when a command is matched, the robot dog performs the corresponding action. The actions implemented so far are standing, lying down, and walking forward for 5 seconds.
#!/usr/bin/python
import rospy
import sched, time
import sys, select, termios, tty # For terminal keyboard key press reading
from std_msgs.msg import Float32, Bool, String
from geometry_msgs.msg import Vector3
from math import pi

msg = """
voice_control function description
start listening
CTRL-C to quit
"""

speed_inc = 0.012
yaw_rate_inc = 1.5*pi/180
angle_inc = 2*pi/180

stand_s = """Stand."""
walk_s = """Work."""  # "walk" tends to be recognized as "work", so match on "Work."
idle_s = """Rest."""

stand_flag = 0
walk_flag = 0
idle_flag = 0

class SpotMicroKeyboardControl():
    def __init__(self):
        # Create messages for body motion commands, and initialize to zero
        self._speed_cmd_msg = Vector3()
        self._speed_cmd_msg.x = 0
        self._speed_cmd_msg.y = 0
        self._speed_cmd_msg.z = 0

        self._walk_event_cmd_msg = Bool()
        self._walk_event_cmd_msg.data = True # Mostly acts as an event driven action on receipt of a true message

        self._stand_event_cmd_msg = Bool()
        self._stand_event_cmd_msg.data = True

        self._idle_event_cmd_msg = Bool()
        self._idle_event_cmd_msg.data = True

        rospy.loginfo("Setting Up the Spot Micro Voice Control Node...")

        # Set up and title the ros node for this code
        rospy.init_node('spot_micro_keyboard_control')
        #rospy.Subscriber('voiceWords', String, callbackv)
        #rospy.spin()

        # Create publishers for x,y speed command, body rate command, and state command
        self.ros_pub_speed_cmd = rospy.Publisher('/speed_cmd', Vector3, queue_size=1)
        self.ros_pub_walk_cmd  = rospy.Publisher('/walk_cmd', Bool, queue_size=1)
        self.ros_pub_stand_cmd = rospy.Publisher('/stand_cmd', Bool, queue_size=1)
        self.ros_pub_idle_cmd  = rospy.Publisher('/idle_cmd', Bool, queue_size=1)

        rospy.loginfo("> Publishers correctly initialized")
        rospy.loginfo("Initialization complete")

        # Setup terminal input reading, taken from teleop_twist_keyboard
        self.settings = termios.tcgetattr(sys.stdin)

    def reset_all_motion_commands_to_zero(self):
        '''Reset body motion cmd states to zero and publish zero value body motion commands'''
        self._speed_cmd_msg.x = 0
        self._speed_cmd_msg.y = 0
        self._speed_cmd_msg.z = 0
        self.ros_pub_speed_cmd.publish(self._speed_cmd_msg)

    def run(self):
        global stand_flag, walk_flag, idle_flag

        def callbackv(data):
            global stand_flag, walk_flag, idle_flag
            rospy.loginfo(rospy.get_caller_id() + 'I heard %s', data.data)
            if (data.data == stand_s):
                stand_flag = 1
            if (data.data == walk_s):
                walk_flag = 1
            if (data.data == idle_s):
                idle_flag = 1

        self.reset_all_motion_commands_to_zero()

        # Publish the stand mode
        self.ros_pub_stand_cmd.publish(self._stand_event_cmd_msg)

        rospy.Subscriber('voiceWords', String, callbackv)

        while not rospy.is_shutdown():
            #print(msg)
            rospy.loginfo('wait for the command')
            time.sleep(1)
            if (stand_flag == 1):
                self.ros_pub_stand_cmd.publish(self._stand_event_cmd_msg)
                print('stand mode')
                time.sleep(5)
                stand_flag = 0
            elif (idle_flag):
                self.ros_pub_idle_cmd.publish(self._idle_event_cmd_msg)
                print('idle mode')
                idle_flag = 0
            elif (walk_flag):
                self.ros_pub_stand_cmd.publish(self._stand_event_cmd_msg)
                time.sleep(1)
                self.reset_all_motion_commands_to_zero()
                self.ros_pub_walk_cmd.publish(self._walk_event_cmd_msg)
                print('walk mode')
                time.sleep(0.1)
                self._speed_cmd_msg.x = 0.05
                self.ros_pub_speed_cmd.publish(self._speed_cmd_msg)
                print('Cmd Values: x speed: %1.3f m/s, y speed: %1.3f m/s '
                      % (self._speed_cmd_msg.x, self._speed_cmd_msg.y))
                time.sleep(5)
                self._speed_cmd_msg.x = 0
                self._speed_cmd_msg.y = 0
                self._speed_cmd_msg.z = 0
                self.ros_pub_speed_cmd.publish(self._speed_cmd_msg)
                self.ros_pub_stand_cmd.publish(self._stand_event_cmd_msg)
                walk_flag = 0

if __name__ == "__main__":
    smkc = SpotMicroKeyboardControl()
    smkc.run()
  • Problems encountered while writing the code

  • 1. Global variables
    stand_flag, walk_flag, and idle_flag raised an error saying the variables were referenced before assignment. It turns out that assigning to a variable inside a function makes Python treat it as local to that function; declaring the names global solves the problem.

global stand_flag, walk_flag, idle_flag
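The error can be reproduced outside ROS in a few lines: assigning to a module-level name anywhere inside a function makes Python treat that name as local for the whole function body, so reading it first raises UnboundLocalError unless it is declared global. A minimal sketch (the function names here are illustrative, not from the original script):

```python
stand_flag = 0

def set_flag_broken():
    # The assignment below makes stand_flag local to this function,
    # so this read raises UnboundLocalError.
    print(stand_flag)
    stand_flag = 1

def set_flag_fixed():
    global stand_flag  # refer to the module-level variable instead
    stand_flag = 1

try:
    set_flag_broken()
except UnboundLocalError:
    print("referenced before assignment")

set_flag_fixed()
print(stand_flag)  # 1
```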
  • 2. The callbackv function was reported as undefined
    I never found the root cause of this one, but after restructuring the code and moving the definition of callbackv inside the run function, the problem went away.

def run(self):
    global stand_flag, walk_flag, idle_flag

    def callbackv(data):
        global stand_flag, walk_flag, idle_flag
        rospy.loginfo(rospy.get_caller_id() + 'I heard %s', data.data)
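Why this restructuring works can be sketched without ROS: a function defined inside a method is created when the method runs, so the name is bound by the time it is registered, and it can close over self directly. The Controller class and the message list below are hypothetical stand-ins for the node and the voiceWords topic:

```python
class Controller:
    def __init__(self):
        self.heard = []

    def run(self, messages):
        # Defining the callback inside run() means it exists before it
        # is used, and it captures self through the enclosing scope.
        def callbackv(data):
            self.heard.append(data)

        for m in messages:  # stand-in for rospy delivering messages
            callbackv(m)

c = Controller()
c.run(["Stand.", "Work."])
print(c.heard)  # ['Stand.', 'Work.']
```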

Summary

At this point the system can recognize pre-recorded audio files and make the robot dog perform simple actions, which validates the feasibility of voice-controlling the robot dog. Many aspects still need further exploration.

  • Voice input:
    Voice interaction currently works by recognizing audio files stored on the Raspberry Pi in advance, mainly because of the Pi's own hardware limitations. The Pi has a built-in 3.5 mm jack, but its pinout differs considerably from an ordinary headphone jack: it is an AV connector that carries video output as well as audio. I did buy a Raspberry Pi microphone device, but in testing it could not do real-time audio input, only recording, and its recording format differs significantly from what iFlytek can recognize; my results with it were poor.

    So I am considering following this tutorial to explore real-time audio input and output, effectively adding a dedicated board for audio I/O. It should perform better than a USB microphone plus a sound card.

  • Recognition accuracy:
    Simple tests on a PC show that accuracy is low for single words but noticeably higher for full sentences. It depends on the requirements: if the robot dog only needs to recognize keywords, iFlytek's keyword-spotting SDK would likely work better for command-style control, but real-time voice interaction would require handling many more cases. This part also needs further exploration and testing.

  • Voice-controlled motion:
    Control of the robot dog's motion is still fairly superficial. All of my code changes so far build on the original keyboard control: the commands cover motion states such as standing, walking, and lying down, plus the walking direction and speed. Achieving autonomous navigation would require much deeper work on the control scripts.

Published: 2024-03-13 10:54:16