Robotics: Completing an Autonomous Rock-Sampling Task in a Simulator

Updated: 2024-10-10 21:32:03


I've recently been working through Udacity's robotics course and will be posting my notes here as I go; today I'll walk through the first project.
To complete it you'll need the following:

  1. The simulator used by the project (MacOS, Linux, Windows);
  2. A working environment for running the code;
  3. The project source code.

Project Overview

The task is to drive a rover in the simulator so that it navigates autonomously, finding and collecting the rock samples scattered around the map. This write-up has two parts. The first covers perception: the rover's camera records the scene at its current position, and image processing labels the coordinates of obstacles, navigable ground, and rock samples. The second covers decision making and control: using the output of the first part to make decisions and drive the rover around the map collecting samples.

Perception

  1. Processing the image
    The rover's view in the simulator looks like this:

    Analyzing the image, the rock samples are yellow, navigable ground is light-colored, and the dark mountains are the obstacles to avoid.
# Identify pixels above the threshold
# Threshold of RGB > 160 does a nice job of identifying ground pixels only
def color_thresh(img, rgb_thresh=(160, 160, 160)):
    # Create arrays of zeros, same xy size as img, but single channel
    navigable_layer = np.zeros_like(img[:, :, 0])
    obstacle_layer = np.zeros_like(img[:, :, 0])
    sampleRocks_layer = np.zeros_like(img[:, :, 0])
    # The yellow rocks are easier to pick out in HSV space
    sampleRocks_thresh = (20, 100, 100)
    hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
    # Threshold the image
    navigable = (img[:, :, 0] > rgb_thresh[0]) \
        & (img[:, :, 1] > rgb_thresh[1]) \
        & (img[:, :, 2] > rgb_thresh[2])
    sampleRocks = (hsv[:, :, 0] > sampleRocks_thresh[0]) \
        & (hsv[:, :, 1] > sampleRocks_thresh[1]) \
        & (hsv[:, :, 2] > sampleRocks_thresh[2])
    obstacle = (img[:, :, 0] <= rgb_thresh[0]) \
        & (img[:, :, 1] <= rgb_thresh[1]) \
        & (img[:, :, 2] <= rgb_thresh[2])
    # Return the binary images for ground, obstacles and rock samples
    navigable_layer[navigable] = 1
    obstacle_layer[obstacle] = 1
    sampleRocks_layer[sampleRocks] = 1
    return navigable_layer, obstacle_layer, sampleRocks_layer
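To sanity-check the RGB ground threshold in isolation, a tiny synthetic image can be run through just the navigable-terrain test (the HSV rock mask is left out here so the sketch runs without OpenCV; `navigable_thresh` is a name I made up for this cut-down version):

```python
import numpy as np

# Standalone version of the ground test inside color_thresh:
# a pixel counts as navigable only if all three channels exceed 160
def navigable_thresh(img, rgb_thresh=(160, 160, 160)):
    layer = np.zeros_like(img[:, :, 0])
    above = (img[:, :, 0] > rgb_thresh[0]) \
        & (img[:, :, 1] > rgb_thresh[1]) \
        & (img[:, :, 2] > rgb_thresh[2])
    layer[above] = 1
    return layer

# 2x2 synthetic image: one bright "ground" pixel, three dark ones
img = np.array([[[200, 200, 200], [100, 100, 100]],
                [[50, 50, 50], [30, 30, 30]]], dtype=np.uint8)
mask = navigable_thresh(img)
print(mask)  # [[1 0]
             #  [0 0]]
```

Only the top-left pixel clears the threshold on all three channels, so it is the only 1 in the mask.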


Because the rocks, the ground, and the mountains contrast strongly in color, appropriate thresholds separate the three cleanly. To plan the rover's route, the camera view then needs to be transformed into views that are easier to navigate with: the rover's position is marked in the map's coordinate frame, and polar coordinates are used to steer the rover's heading.

Coordinate Transforms

  1. Top-down view
    From the rover's viewpoint the road looks wide up close and narrow in the distance, but the real road has constant width, so the camera image must be warped into a top-down (bird's-eye) view.
# Define a function to perform a perspective transform
# I've used the example grid image above to choose source points for the
# grid cell in front of the rover (each grid cell is 1 square meter in the sim)
def perspect_transform(img, src, dst):
    M = cv2.getPerspectiveTransform(src, dst)
    # Keep the same size as the input image
    warped = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))
    return warped
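`perspect_transform` needs four source points read off the calibration grid image and four destination points forming a square at the bottom center of the warped image. A minimal sketch of that calibration follows; the exact source pixel values are an assumption read by hand from one calibration image of the simulator's 320x160 camera, not canonical numbers:

```python
import numpy as np

# Half-size, in pixels, that one 1m x 1m grid cell should occupy in
# the top-down view, and the gap between the rover and the image bottom
dst_size = 5
bottom_offset = 6

# Four corners of the grid cell in the camera image
# (hand-measured on one calibration image -- an assumption)
source = np.float32([[14, 140], [301, 140], [200, 96], [118, 96]])

# The same cell as a 2*dst_size square centered at the bottom
# of a 320x160 image
h, w = 160, 320
destination = np.float32([
    [w / 2 - dst_size, h - bottom_offset],
    [w / 2 + dst_size, h - bottom_offset],
    [w / 2 + dst_size, h - 2 * dst_size - bottom_offset],
    [w / 2 - dst_size, h - 2 * dst_size - bottom_offset],
])

# Usage with the function above:
# warped = perspect_transform(image, source, destination)
```

With `dst_size = 5`, one meter of ground maps to 10 pixels in the warped image, which fixes the `scale` used later when projecting onto the world map.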

  2. Rover-centric Cartesian coordinates
    Next, convert the top-down view into Cartesian coordinates centered on the rover. This gives the layout of the surrounding terrain with the rover at the origin, which makes it easier for the decision step to adjust heading and speed.

# Define a function to convert from image coords to rover coords
def rover_coords(binary_img):
    # Identify nonzero pixels
    ypos, xpos = binary_img.nonzero()
    # Calculate pixel positions with reference to the rover position
    # being at the center bottom of the image
    x_pixel = -(ypos - binary_img.shape[0]).astype(float)
    y_pixel = -(xpos - binary_img.shape[1] / 2).astype(float)
    return x_pixel, y_pixel

# Grab another random image
idx = np.random.randint(0, len(img_list)-1)
image = mpimg.imread(img_list[idx])
warped = perspect_transform(image, source, destination)
navigable, obstacle, sampleRocks = color_thresh(warped)
# Calculate pixel values in rover-centric coords and distance/angle to all pixels
xpix, ypix = rover_coords(navigable)

  3. Map-based coordinates
    This step maps the rover-centric Cartesian coordinates into the world map frame, so the rover's position can also be shown on the map and its progress is easier to track.
# Define a function to map rover space pixels to world space
def rotate_pix(xpix, ypix, yaw):
    # Convert yaw to radians
    yaw_rad = yaw * np.pi / 180
    xpix_rotated = (xpix * np.cos(yaw_rad)) - (ypix * np.sin(yaw_rad))
    ypix_rotated = (xpix * np.sin(yaw_rad)) + (ypix * np.cos(yaw_rad))
    # Return the result
    return xpix_rotated, ypix_rotated

def translate_pix(xpix_rot, ypix_rot, xpos, ypos, scale):
    # Apply a scaling and a translation
    xpix_translated = (xpix_rot / scale) + xpos
    ypix_translated = (ypix_rot / scale) + ypos
    # Return the result
    return xpix_translated, ypix_translated

# Define a function to apply rotation and translation (and clipping)
# Once you define the two functions above this function should work
def pix_to_world(xpix, ypix, xpos, ypos, yaw, world_size, scale):
    # Apply rotation
    xpix_rot, ypix_rot = rotate_pix(xpix, ypix, yaw)
    # Apply translation
    xpix_tran, ypix_tran = translate_pix(xpix_rot, ypix_rot, xpos, ypos, scale)
    # Clip to world_size
    x_pix_world = np.clip(np.int_(xpix_tran), 0, world_size - 1)
    y_pix_world = np.clip(np.int_(ypix_tran), 0, world_size - 1)
    # Return the result
    return x_pix_world, y_pix_world
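A quick check of the rotate-then-translate math: a point one meter straight ahead of the rover should end up one meter along the world y axis when the rover faces yaw = 90 degrees. The sketch below repeats the two helpers so it runs stand-alone; the example positions and the 10 pixels-per-meter scale are made-up values for illustration:

```python
import numpy as np

def rotate_pix(xpix, ypix, yaw):
    # Rotate rover-frame pixels by the rover's yaw (degrees)
    yaw_rad = yaw * np.pi / 180
    xr = xpix * np.cos(yaw_rad) - ypix * np.sin(yaw_rad)
    yr = xpix * np.sin(yaw_rad) + ypix * np.cos(yaw_rad)
    return xr, yr

def translate_pix(xpix_rot, ypix_rot, xpos, ypos, scale):
    # Scale pixels to meters and shift to the rover's world position
    return xpix_rot / scale + xpos, ypix_rot / scale + ypos

# Rover at world (10, 10), facing yaw = 90 deg, 10 pixels per meter:
# a point 10 pixels (1 m) ahead should land at world (10, 11)
xr, yr = rotate_pix(np.array([10.0]), np.array([0.0]), 90)
xw, yw = translate_pix(xr, yr, 10, 10, 10)
print(round(float(xw[0]), 3), round(float(yw[0]), 3))  # 10.0 11.0
```

The order matters: rotating after translating would spin the point around the map origin instead of around the rover.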
  4. Steering with polar coordinates
    Finally, convert the rover-centric coordinates to polar coordinates, giving the angle and distance of the navigable terrain relative to the rover. The mean of the angles over all navigable pixels becomes the rover's heading; note that the steering angle should be clipped to a limited range.
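The example below calls `to_polar_coords`, which the post never shows; this is the usual definition, consistent with how it is used here:

```python
import numpy as np

# Convert rover-centric (x, y) pixels to polar coordinates:
# distance from the rover and angle relative to its heading
# (the x axis points straight ahead of the rover)
def to_polar_coords(x_pixel, y_pixel):
    dist = np.sqrt(x_pixel**2 + y_pixel**2)
    angles = np.arctan2(y_pixel, x_pixel)
    return dist, angles

# A pixel at (3, 4) is 5 units away, at about 0.927 rad to the left
d, a = to_polar_coords(np.array([3.0]), np.array([4.0]))
print(float(d[0]))  # 5.0
```

`np.arctan2` keeps the sign of the angle, so pixels to the rover's left give positive angles and pixels to its right give negative ones.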
image = mpimg.imread('angle_example.jpg')
warped = perspect_transform(image, source, destination) # Perform perspective transform
navigable, obstacle, sampleRocks = color_thresh(warped, rgb_thresh=(160, 160, 160)) # Threshold the warped image
xpix, ypix = rover_coords(navigable)  # Convert to rover-centric coords
distances, angles = to_polar_coords(xpix, ypix) # Convert to polar coords
avg_angle = np.mean(angles) # Compute the average angle
avg_angle_degrees = avg_angle * 180/np.pi
steering = np.clip(avg_angle_degrees, -15, 15) # Limit the steering angle


That wraps up the perception part. The core task is to map the road and rock samples from the camera image into a rover-centric polar frame; the result, a heading for the rover and the distance it can travel in that direction, is passed as input to the decision step.

Decision Making

Before making any decisions, we first define a rover class with the attributes the decision logic relies on.

# Define RoverState() class to retain rover state parameters
class RoverState():
    def __init__(self):
        self.start_time = None # To record the start time of navigation
        self.total_time = None # To record total duration of navigation
        self.img = None # Current camera image
        self.pos = None # Current position (x, y)
        self.yaw = None # Current yaw angle
        self.pitch = None # Current pitch angle
        self.roll = None # Current roll angle
        self.vel = None # Current velocity
        self.steer = 0 # Current steering angle
        self.throttle = 0 # Current throttle value
        self.brake = 0 # Current brake value
        self.nav_angles = None # Angles of navigable terrain pixels
        self.nav_dists = None # Distances of navigable terrain pixels
        self.ground_truth = ground_truth_3d # Ground truth worldmap
        self.mode = 'forward' # Current mode (can be forward or stop)
        self.throttle_set = 0.2 # Throttle setting when accelerating
        self.brake_set = 10 # Brake setting when braking
        # The stop_forward and go_forward fields below represent total count
        # of navigable terrain pixels.  This is a very crude form of knowing
        # when you can keep going and when you should stop.  Feel free to
        # get creative in adding new fields or modifying these!
        self.stop_forward = 50 # Threshold to initiate stopping
        self.go_forward = 500 # Threshold to go forward again
        self.max_vel = 2 # Maximum velocity (meters/second)
        # Image output from perception step
        # Update this image to display your intermediate analysis steps
        # on screen in autonomous mode
        self.vision_image = np.zeros((160, 320, 3), dtype=float)
        # Worldmap
        # Update this image with the positions of navigable terrain,
        # obstacles and rock samples
        self.worldmap = np.zeros((200, 200, 3), dtype=float)
        self.samples_pos = None # To store the actual sample positions
        self.samples_found = 0 # To count the number of samples found
        self.near_sample = False # Set to True if within reach of a rock sample
        self.pick_up = False # Set to True to trigger rock pickup

The decision logic itself is simple but the code is verbose, so I won't paste it here; if you're interested, download the source and read through it. In short, the rover has two modes: forward and stop. In forward mode, if the navigable terrain ahead exceeds stop_forward pixels, the rover keeps going; otherwise it stops and enters stop mode. In stop mode, if the speed is above 0.5 it brakes and turns left; once the speed drops below 0.5 it only steers, turning in place until more than go_forward (500) navigable pixels are in view, at which point it accelerates back into forward mode.
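Since the post omits the decision code, here is my reconstruction of the two-mode state machine it describes, a sketch rather than the project's actual implementation. The field names follow the RoverState class above; the 0.5 m/s cutoff and the thresholds come from the text:

```python
import numpy as np

def decision_step(Rover):
    # No perception data yet: do nothing this frame
    if Rover.nav_angles is None:
        return Rover
    if Rover.mode == 'forward':
        if len(Rover.nav_angles) >= Rover.stop_forward:
            # Enough open terrain: accelerate up to max_vel and steer
            # toward the mean angle of the navigable pixels
            Rover.throttle = Rover.throttle_set if Rover.vel < Rover.max_vel else 0
            Rover.brake = 0
            Rover.steer = np.clip(np.mean(Rover.nav_angles * 180 / np.pi), -15, 15)
        else:
            # Path ahead is blocked: hit the brakes and switch modes
            Rover.throttle = 0
            Rover.brake = Rover.brake_set
            Rover.steer = 0
            Rover.mode = 'stop'
    elif Rover.mode == 'stop':
        if Rover.vel > 0.5:
            # Still moving: keep braking
            Rover.throttle = 0
            Rover.brake = Rover.brake_set
        elif len(Rover.nav_angles) < Rover.go_forward:
            # Stopped but still blocked: release the brake and turn in place
            Rover.throttle = 0
            Rover.brake = 0
            Rover.steer = -15
        else:
            # Enough open terrain again: resume forward mode
            Rover.throttle = Rover.throttle_set
            Rover.brake = 0
            Rover.steer = np.clip(np.mean(Rover.nav_angles * 180 / np.pi), -15, 15)
            Rover.mode = 'forward'
    return Rover
```

Each call mutates and returns the same Rover object, which matches how the telemetry loop below chains perception_step and decision_step.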

Control

With the angle, throttle, and brake commands issued by the decision system, the rover can drive itself around the map. While driving, the rock-detection functions from part one keep running; when a sample is detected, the rover drives to its position and sends the simulator the pickup command. If the rover can drive around the map autonomously and collect the samples, the project is complete.

def telemetry(sid, data):
    if data:
        global Rover
        # Initialize / update Rover with current telemetry
        Rover, image = update_rover(Rover, data)
        if np.isfinite(Rover.vel):
            # Execute the perception and decision steps to update the Rover's state
            Rover = perception_step(Rover)
            Rover = decision_step(Rover)
            # Create output images to send to server
            out_image_string1, out_image_string2 = create_output_images(Rover)
            # The action step!  Send commands to the rover!
            commands = (Rover.throttle, Rover.brake, Rover.steer)
            send_control(commands, out_image_string1, out_image_string2)
            # If in a state where want to pickup a rock send pickup command
            if Rover.pick_up and not Rover.picking_up:
                send_pickup()
                # Reset Rover flags
                Rover.pick_up = False
        # In case of invalid telemetry, send null commands
        else:
            # Send zeros for throttle, brake and steer and empty images
            send_control((0, 0, 0), '', '')
        # If you want to save camera images from autonomous driving specify a path
        # Example: $ python drive_rover.py image_folder_path
        # Conditional to save image frame if folder was specified
        if args.image_folder != '':
            timestamp = datetime.utcnow().strftime('%Y_%m_%d_%H_%M_%S_%f')[:-3]
            image_filename = os.path.join(args.image_folder, timestamp)
            image.save('{}.jpg'.format(image_filename))
    else:
        sio.emit('manual', data={}, skip_sid=True)


Published: 2024-02-26 22:19:08