
[Project Submission] 2024 DigiKey Automotive Application Creative Challenge: Raspberry Pi-Based In-Vehicle Vision...

I. Project Name:
A Raspberry Pi-Based Uncertainty Control System for In-Vehicle Vision Analysis
II. Project Overview:
With advances in computing and the recent breakthroughs in deep learning, autonomous driving has developed rapidly, driven both by improved technical support and by society's growing expectations for driving comfort, and research has increasingly centered on deep learning algorithms built on computer vision.
This project uses a Raspberry Pi as an edge computing gateway and ports an end-to-end convolutional neural network, of the kind originally developed for autonomous driving, onto a smart car as its driving algorithm: road images captured by the camera serve as the network's input, and the car's driving commands are the network's output. The aim is to test the feasibility of transplanting such a deep learning algorithm onto a small smart car. The final design achieves stable autonomous driving on the test track even under disturbances such as external pushes and uneven lighting. The system uses a Raspberry Pi 5 as the main controller, communicating with a PC over a wireless network, and covers remote control of the car, image acquisition, image processing, CNN construction and training, loading the trained network onto the car, and autonomous driving.
The overall design effort covers electronic system design and debugging, including requirements analysis, PCB fabrication, component procurement, assembly, soldering, hardware debugging, software module development, software module testing, and whole-system testing.
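The end-to-end network itself is not listed in this post (see the attached PDF). Purely as a hypothetical sketch, and not the project's actual model, a small camera-to-steering CNN in the spirit described above might look like the following; the framework (PyTorch), the input resolution, and the layer sizes are all assumptions.

# Hypothetical sketch only: a compact end-to-end CNN mapping a road image to a single
# steering command. Framework (PyTorch), layer sizes, and input size are assumptions,
# not the project's actual network.
import torch
import torch.nn as nn

class DrivingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # strided convolutions extract road features from an RGB camera frame
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dimensions so any frame size works
            nn.Flatten(),
        )
        # small regression head producing one steering value
        self.head = nn.Sequential(
            nn.Linear(64, 50), nn.ReLU(),
            nn.Linear(50, 1),
        )

    def forward(self, x):
        # x: batch of frames, shape (N, 3, H, W), values scaled to [0, 1]
        return self.head(self.features(x))

# quick shape check with a dummy 120x160 frame
model = DrivingCNN()
steering = model(torch.zeros(1, 3, 120, 160))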
III. Photos of the Finished Work

V. Project Documentation

Overview of the lane detection pipeline


Capture and decode the video file: we use a VideoFileClip object to capture the video and, once capture is initialized, decode each video frame (i.e., convert the video into a sequence of images).

Grayscale conversion: video frames arrive in RGB; we convert them to grayscale because processing a single-channel image is faster than processing a three-channel color image.

Noise reduction: noise produces false edges, so the image must be smoothed before further processing. We use a Gaussian blur, a standard image filtering technique for noise reduction: the filter weights are drawn from a Gaussian distribution, and each pixel is replaced by a weighted average of its neighbors. Suppressing high-frequency content in this way yields a smoother image with fewer spurious edges (a short sketch of the kernel weights follows this list of steps).

Canny edge detector: it computes the intensity gradients of the blurred image in all directions and traces the edges where the change in intensity is large.

Region of interest: this step keeps only the area covered by the lane. We create a mask with the same dimensions as the road image and perform a bitwise AND between each pixel of the Canny output and the mask; the result blanks out everything except the region of interest outlined by the mask polygon.

Hough line transform: in image processing, the Hough transform is a feature extraction method for finding simple geometric shapes such as lines and circles. By mapping image space into a parameter space, shapes are identified by accumulating votes. We use the probabilistic Hough line transform, an extension introduced to reduce the computational cost: it randomly samples a subset of image points and applies the Hough transform only to those, which speeds up processing while keeping shape detection accurate.

Annotating video frames: once the lane lines in our region of interest have been identified with the Hough line transform, we overlay them on the visual input (video stream/image).
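To make the "Gaussian weights" above concrete, the short sketch below prints the 5x5 weight matrix that cv2.GaussianBlur effectively applies for the kernel size used later in Step 4 (sigma = 0 lets OpenCV derive sigma from the kernel size); this snippet is illustrative and not part of the original pipeline.

# Print the 5x5 Gaussian weights behind cv2.GaussianBlur(gray, (5, 5), 0).
import cv2
import numpy as np

k = cv2.getGaussianKernel(5, 0)   # 1-D kernel; sigma derived from the kernel size
weights_2d = k @ k.T              # the blur is separable, so the 2-D weights are the outer product
print(np.round(weights_2d, 4))    # each output pixel is this weighted average of its 5x5 neighborhood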

The detailed procedure is too long to post in full (see the attached PDF).
Step 1: Install OpenCV
pip install -q opencv-python
Step 2: Import the necessary libraries
# Libraries for working with image processing
    import numpy as np
    import pandas as pd
    import cv2
    from google.colab.patches import cv2_imshow
    # Libraries needed to edit/save/watch video clips
    from moviepy import editor
    import moviepy
Step 3: Define the video preprocessing function
def process_video(test_video, output_video):
    """
    Read input video stream and produce a video file with detected lane lines.
    Parameters:
    test_video: location of input video file
    output_video: location where output video file is to be saved
    """
    # read the video file using VideoFileClip without audio
    input_video = editor.VideoFileClip(test_video, audio=False)
    # apply the function "frame_processor" to each frame of the video
    # will give more detail about "frame_processor" in further steps
    # "processed" stores the output video
    processed = input_video.fl_image(frame_processor)
    # save the output video stream to an mp4 file
    processed.write_videofile(output_video, audio=False)
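A small compatibility note added here: the editor module and the fl_image method used above belong to MoviePy 1.x. If MoviePy 2.x happens to be installed instead, an equivalent driver would look roughly like the sketch below; check the renamed calls against the installed version's documentation.

# Rough MoviePy 2.x equivalent of process_video (verify names against your installed version).
from moviepy import VideoFileClip

def process_video_v2(test_video, output_video):
    # MoviePy 2.x exposes VideoFileClip at the top level and renames fl_image to image_transform
    input_video = VideoFileClip(test_video, audio=False)
    processed = input_video.image_transform(frame_processor)
    processed.write_videofile(output_video, audio=False)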

Step 4: Define the lane detection function, which performs lane detection on every frame of the video.
def frame_processor(image):
    """
    Process the input frame to detect lane lines.
    Parameters:
        image: image of a road where one wants to detect lane lines
        (we will be passing frames of video to this function)
    """
    # convert the frame to grayscale (MoviePy delivers frames in RGB order)
    grayscale = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    # applying gaussian Blur which removes noise from the image
    # and focuses on our region of interest
    # size of gaussian kernel
    kernel_size = 5
    # Applying gaussian blur to remove noise from the frames
    blur = cv2.GaussianBlur(grayscale, (kernel_size, kernel_size), 0)
    # first threshold for the hysteresis procedure
    low_t = 50
    # second threshold for the hysteresis procedure
    high_t = 150
    # applying canny edge detection and save edges in a variable
    edges = cv2.Canny(blur, low_t, high_t)
    # since we are getting too many edges from our image, we apply
    # a mask polygon to only focus on the road
    # Will explain Region selection in detail in further steps
    region = region_selection(edges)
    # Applying hough transform to get straight lines from our image
    # and find the lane lines
    # Will explain Hough Transform in detail in further steps
    hough = hough_transform(region)
    # lastly we draw the lines on our resulting frame and return it as output
    result = draw_lane_lines(image, lane_lines(image, hough))
    return result

Output:

Step 5: Region selection. So far we have converted the frame from RGB to grayscale, applied a Gaussian blur to reduce noise, and run Canny edge detection. Next we select the region in which we want to detect the lanes.
def region_selection(image):
    """
    Determine and cut the region of interest in the input image.
    Parameters:
        image: we pass here the output from canny where we have
        identified edges in the frame
    """
    # create an array of the same size as of the input image
    mask = np.zeros_like(image)
    # if you pass an image with more than one channel
    if len(image.shape) > 2:
        channel_count = image.shape[2]
        ignore_mask_color = (255,) * channel_count
    # our image only has one channel so it will go under "else"
    else:
        # color of the mask polygon (white)
        ignore_mask_color = 255
    # creating a polygon to focus only on the road in the picture
    # we have created this polygon in accordance to how the camera was placed
    rows, cols = image.shape[:2]
    bottom_left = [cols * 0.1, rows * 0.95]
    top_left = [cols * 0.4, rows * 0.6]
    bottom_right = [cols * 0.9, rows * 0.95]
    top_right = [cols * 0.6, rows * 0.6]
    vertices = np.array([[bottom_left, top_left, top_right, bottom_right]], dtype=np.int32)
    # filling the polygon with white color and generating the final mask
    cv2.fillPoly(mask, vertices, ignore_mask_color)
    # performing Bitwise AND on the input image and mask to get only the edges on the road
    masked_image = cv2.bitwise_and(image, mask)
    return masked_image

Output:
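The polygon fractions in region_selection are hard-coded for one particular camera placement. As a small debugging aid (my addition, not from the original post), the sketch below draws the same polygon on a sample frame so the fractions can be tuned by eye; "road_frame.jpg" is a hypothetical still exported from the test video.

# Visualize the region-of-interest polygon used in region_selection (debugging aid only).
import cv2
import numpy as np
from google.colab.patches import cv2_imshow

frame = cv2.imread("road_frame.jpg")   # hypothetical sample frame
rows, cols = frame.shape[:2]
pts = np.array([
    [cols * 0.1, rows * 0.95],         # bottom_left
    [cols * 0.4, rows * 0.6],          # top_left
    [cols * 0.6, rows * 0.6],          # top_right
    [cols * 0.9, rows * 0.95],         # bottom_right
], dtype=np.int32)
preview = frame.copy()
cv2.polylines(preview, [pts], isClosed=True, color=(0, 255, 0), thickness=3)
cv2_imshow(preview)                    # use cv2.imshow outside Colab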








Attachment: 项目成果提交.pdf (694.83 KB)

Step 6: Identify straight lines with the Hough transform
We now use the probabilistic Hough transform to identify straight lines in the image output by the previous function.
    def hough_transform(image):
            """
        Detect line segments in the input image using the probabilistic Hough transform.
            Parameter:
                    image: grayscale image which should be an output from the edge detector
            """
            # Distance resolution of the accumulator in pixels.
            rho = 1                       
            # Angle resolution of the accumulator in radians.
            theta = np.pi/180
            # Only lines that are greater than threshold will be returned.
            threshold = 20       
            # Line segments shorter than that are rejected.
            minLineLength = 20
            # Maximum allowed gap between points on the same line to link them
            maxLineGap = 500       
            # function returns an array containing dimensions of straight lines
            # appearing in the input image
            return cv2.HoughLinesP(image, rho = rho, theta = theta, threshold = threshold,
                                                    minLineLength = minLineLength, maxLineGap = maxLineGap)

Output:
[[[284 180 382 278]]
 [[281 180 379 285]]
 [[137 274 183 192]]
 [[140 285 189 188]]
 [[313 210 388 285]]
 [[139 285 188 188]]
 [[132 282 181 194]]
 [[146 285 191 196]]
 [[286 187 379 284]]]

Step 7: Draw the lines on the video frames
Now that the Hough transform has given us line coordinates, we draw them on the original image (frame). As the output above shows, we get more than two lines, so we first compute the slopes and intercepts of the left and right lanes and then overlay the averaged lane lines on the original image.
We define four helper functions to draw the left and right lane lines on the input frame:

average_slope_intercept: takes the lines from the Hough transform and computes their slopes and intercepts. A line with a negative slope belongs to the left lane; otherwise it belongs to the right lane. We then compute a length-weighted average slope and intercept for each lane.
pixel_points: from a line's slope, intercept, and two y-values, computes the corresponding x-values and returns the lane's x and y coordinates as integers.
lane_lines: calls average_slope_intercept and pixel_points to compute the coordinates of the left and right lane lines.
draw_lane_lines: draws the left and right lane lines on the input frame and returns the output frame, which is then stored in the variable "processed" inside our driver function process_video.

    def average_slope_intercept(lines):

            """
            Find the slope and intercept of the left and right lanes of each image.
            Parameters:
                    lines: output from Hough Transform
            """
            left_lines = [] #(slope, intercept)
            left_weights = [] #(length,)
            right_lines = [] #(slope, intercept)
            right_weights = [] #(length,)
           
            for line in lines:
                    for x1, y1, x2, y2 in line:
                            if x1 == x2:
                                    continue
                            # calculating slope of a line
                            slope = (y2 - y1) / (x2 - x1)
                            # calculating intercept of a line
                            intercept = y1 - (slope * x1)
                            # calculating length of a line
                            length = np.sqrt(((y2 - y1) ** 2) + ((x2 - x1) ** 2))
                            # slope of left lane is negative and for right lane slope is positive
                            if slope < 0:
                                    left_lines.append((slope, intercept))
                                    left_weights.append((length))
                            else:
                                    right_lines.append((slope, intercept))
                                    right_weights.append((length))
        # length-weighted average of (slope, intercept) for each lane
            left_lane = np.dot(left_weights, left_lines) / np.sum(left_weights) if len(left_weights) > 0 else None
            right_lane = np.dot(right_weights, right_lines) / np.sum(right_weights) if len(right_weights) > 0 else None
            return left_lane, right_lane

    def pixel_points(y1, y2, line):
            """
            Converts the slope and intercept of each line into pixel points.
                    Parameters:
                            y1: y-value of the line's starting point.
                            y2: y-value of the line's end point.
                            line: The slope and intercept of the line.
            """
            if line is None:
                    return None
            slope, intercept = line
            x1 = int((y1 - intercept)/slope)
            x2 = int((y2 - intercept)/slope)
            y1 = int(y1)
            y2 = int(y2)
            return ((x1, y1), (x2, y2))

    def lane_lines(image, lines):
            """
        Create full length lines from pixel points.
                    Parameters:
                            image: The input test image.
                            lines: The output lines from Hough Transform.
            """
            left_lane, right_lane = average_slope_intercept(lines)
            y1 = image.shape[0]
            y2 = y1 * 0.6
            left_line = pixel_points(y1, y2, left_lane)
            right_line = pixel_points(y1, y2, right_lane)
            return left_line, right_line

           
    def draw_lane_lines(image, lines, color=[255, 0, 0], thickness=12):
            """
            Draw lines onto the input image.
                    Parameters:
                            image: The input test image (video frame in our case).
                            lines: The output lines from Hough Transform.
                            color (Default = red): Line color.
                            thickness (Default = 12): Line thickness.
            """
            line_image = np.zeros_like(image)
            for line in lines:
                    if line is not None:
                            cv2.line(line_image, *line, color, thickness)
            return cv2.addWeighted(image, 1.0, line_image, 1.0, 0.0)

Output:
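To close the loop, here is how the driver from Step 3 can be invoked once all of the functions above are defined; the file names are placeholders rather than files from the original post.

# Example invocation; "input_road.mp4" / "output_road.mp4" are placeholder file names.
process_video("input_road.mp4", "output_road.mp4")

# The same pipeline on a single still image, handy for quick checks in Colab.
# cv2.imread returns BGR while frame_processor expects RGB frames (as delivered by MoviePy),
# so convert before and after.
frame = cv2.imread("road_frame.jpg")
annotated = frame_processor(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
cv2_imshow(cv2.cvtColor(annotated, cv2.COLOR_RGB2BGR))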