
Guidance control of vehicles based on visual feedback via internet

Abstract

This paper investigates vehicle guidance control based on visual information provided by a webcam mounted on the vehicle. With the help of image processing techniques such as binarization, the Canny edge detection method, and the Hough transform, the road lines are identified, which in turn defines the drivable area. The proposed control scheme adopts a simple algorithm to guide the vehicle within the road lines. The computation that defines the drivable area and the control algorithm runs in a control center, which connects to the vehicle via a WiFi wireless communication system; the image information is transmitted back to the control center over the same link. Furthermore, the control center can not only monitor the vehicle inside a certain area but also control the vehicle dynamically in real time. To simplify the experimental setup, the drivable area is defined as a superhighway, which allows only cars on the road. Two experimental results, one on a straight road and the other on a curved road, are given to demonstrate the effectiveness of the proposed guidance control system.

1 Introduction

Many vision-based control systems for vehicles have been developed in the literature to detect dangerous areas, obstacle edges, and road lines [1, 2]. One of these detection methods is line detection, which determines the drivable area bounded by the road lines; detecting the lines thus defines the drivable area. On the other hand, line detection also provides a comparison basis for safe driving, enabling warning systems that help avoid collisions [3–6]. Moreover, as wireless communication is now widely deployed, modern vehicles are usually equipped with wireless communication components such as GPS, webcams, and WiFi modules. Furthermore, control methodologies for the locomotion and turning guidance of a vehicle have also been developed to make automatic driving feasible [7, 8].

In addition, during a long journey on the freeway, drivers easily become drowsy because the road is usually straight and smooth for long stretches. The proposed vehicle guidance control system is developed based on visual feedback images taken in front of the vehicle. When the images are transmitted back to the control center, the image processing program first analyzes them and identifies the road lines; after that, the guidance control algorithm generates the control signals that keep the vehicle running inside the drivable area, i.e., within the road lines. The control center then sends these control signals to the vehicle to guide its motion.

In the image processing stage, binarization is followed by the Canny edge detection process and the Hough transform to identify the edges of the road lines and plot them on the display. To simplify the calculation of the distances between the vehicle and the road lines, the webcam is mounted on the center of the vehicle at a certain height. Using a simple feedback control technique, the vehicle automatically modifies its yaw angle so that it stays on the desired trail inside the drivable area. Two experimental results, one on a straight road and the other on a curved road, are provided to validate the performance; both are satisfactory. Recently, 3D techniques have been incorporated into vehicle guidance design to improve the performance of vehicle control [8, 9]. On the other hand, the real-time issue also plays an important role because the environment changes rapidly [10]. Therefore, the proposed control scheme not only provides vehicle guidance based on visual feedback via the Internet but also addresses the real-time issue.

2 System structure

2.1 Self-propelled vehicle

The vehicle used in the experimental setup is illustrated in Fig. 1. It is equipped not only with self-propelled ability but also with a WiFi module to communicate with the control center and a webcam (DCS-9301) mounted on the center of the vehicle to acquire images of the road ahead [11]. The control board of the vehicle combines a WiFi module (RN-131C), a BS2 microprocessor, and circuitry that provides the signals for driving the two DC motors that rotate the wheels. The webcam is also equipped with a WiFi module to deliver image information to the control center in real time via the wireless WiFi communication system.
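The paper does not specify the command protocol between the control center and the RN-131C/BS2 control board. The following minimal Python sketch therefore assumes a plain TCP socket and single-byte motor commands, both of which are hypothetical; it only illustrates how the control center could push drive commands over the WiFi link.

```python
import socket

# Hypothetical single-byte motor commands interpreted by the BS2 board;
# the actual protocol of the RN-131C/BS2 control board is not given in the paper.
CMD_FORWARD, CMD_LEFT, CMD_RIGHT, CMD_STOP = b"F", b"L", b"R", b"S"

class VehicleLink:
    """Minimal TCP link from the control center to the vehicle's WiFi module."""

    def __init__(self, host: str, port: int = 2000, timeout: float = 1.0):
        self.sock = socket.create_connection((host, port), timeout=timeout)

    def send(self, cmd: bytes) -> None:
        # One byte per command; the microprocessor turns it into drive
        # signals for the two DC motors.
        self.sock.sendall(cmd)

    def close(self) -> None:
        self.sock.close()

# Usage (address and port are illustrative):
# link = VehicleLink("192.168.1.50")
# link.send(CMD_FORWARD)
```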

Fig. 1 The self-propelled vehicle

2.2 Overall system architecture

The sketch of the overall system is shown in Fig. 2. The system is composed of a control center and a vehicle equipped with a WiFi module and a webcam, as described in subsection 2.1. They are all connected via WiFi, specifically IEEE 802.11g, in the proposed system. The webcam sends the image information to the control center via the Internet (or some wireless network), and the control center likewise sends the control signals to the WiFi module on the vehicle to drive it. On the other hand, the control center can also release control back to the driver when needed, for example when approaching an intersection, when the connection between the control center and the controlled vehicle is lost, or upon the driver's request. In the proposed control scheme, the driver always has higher priority than the control center.
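As an illustration of this priority rule, the following sketch encodes the handover decision in Python. The trigger conditions (lost link, intersection ahead, driver request) come from the text above; representing them as boolean flags is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    link_alive: bool        # WiFi connection to the control center is up
    at_intersection: bool   # the vehicle is approaching an intersection
    driver_override: bool   # the driver has requested control

def control_authority(state: VehicleState) -> str:
    """Decide who issues steering commands. The driver always has higher
    priority than the control center, as stated in the paper; modeling the
    triggers as boolean flags is an illustrative choice."""
    if state.driver_override or state.at_intersection or not state.link_alive:
        return "driver"
    return "control_center"
```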

Fig. 2 The sketch of the overall system architecture

2.3 Control process of overall system

The road lines are first detected and then drawn on the images transmitted from the webcam using the image processing methods mentioned above, as shown in the upper part of Fig. 3. The image sent back to the control center is first filtered to remove useless information, e.g., the sky and the lowest part of the road.
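A minimal sketch of this pre-filtering step is given below, assuming the region of interest is simply a horizontal band of the frame; the cropping fractions are assumed values, since the paper does not state them.

```python
import numpy as np

def crop_roi(frame: np.ndarray, top_frac: float = 0.45,
             bottom_frac: float = 0.90) -> np.ndarray:
    """Keep only the horizontal band where road lines are expected,
    discarding the sky above and the road surface nearest the bumper.
    The cropping fractions are assumed values."""
    h = frame.shape[0]
    return frame[int(h * top_frac):int(h * bottom_frac), :]
```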

Fig. 3 The sketch of the control process

After binarization of the original image, the Canny detection technique and the Hough transform are applied, and the road lines are thus detected [12]. Because multiple edge lines are found, some of them, such as those with implausible angles, should be eliminated; the proposed program therefore erases these useless lines. After these processes, the road lines are obtained (left: red line, right: green line). The program also marks the center spot of the image (blue line), regarded as the center of the vehicle. Then, the distances between the center and the right and left lines are calculated simultaneously and shown on the images. While the vehicle is running, the proposed control scheme keeps the left and right distances as constant and as equal as possible, which leads the vehicle to run along the road inside the drivable area. This method is effective not only on straight roads but also on curved roads, as confirmed by the experimental results. However, the speed of the vehicle must be limited on a multi-curve path because the transmission time delay is critical for rapidly changing guidance, as it is for manual driving. The result of the road line detection is illustrated in Fig. 4.
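This pipeline can be sketched with standard OpenCV calls as follows. The numeric thresholds, the angle filter, and the slope-based left/right classification are assumptions; the paper only states that binarization, Canny edge detection, the Hough transform, and the removal of lines with strange angles are used before the left and right distances are measured.

```python
import cv2
import numpy as np

def detect_road_lines(roi):
    """Binarize the ROI, run Canny and the probabilistic Hough transform,
    and keep only plausibly oriented segments, split into left/right sets.
    All numeric thresholds are assumed values, not the paper's."""
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                               minLineLength=30, maxLineGap=10)
    left, right = [], []
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
            if angle > 90:
                angle -= 180
            elif angle < -90:
                angle += 180
            if abs(angle) < 15:   # near-horizontal "strange" lines are erased
                continue
            # Heuristic: in image coordinates the left road line slopes one
            # way and the right road line the other.
            (left if angle < 0 else right).append((x1, y1, x2, y2))
    return left, right

def lateral_distances(left, right, image_width):
    """Pixel distances from the image center (the vehicle center, blue line)
    to the closest left and right segments, measured at their lowest endpoint."""
    cx = image_width // 2
    bottom_x = lambda s: s[0] if s[1] > s[3] else s[2]
    d_left = min((cx - bottom_x(s) for s in left), default=None)
    d_right = min((bottom_x(s) - cx for s in right), default=None)
    return d_left, d_right
```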

Fig. 4 The result of the road line detection

3 Guidance control

The lower part of Fig. 3 gives a block-diagram view of the control process. The navigation of the vehicle relies on a simple algorithm, shown in Fig. 5. The basic concept of the proposed control scheme is to keep the vehicle inside the drivable region determined by the road line detection. Besides road detection, guidance control based on the vanishing point is also a popular approach [13, 14]; methods derived from the vanishing point are especially effective when the edges are parallel. The proposed control scheme does not use vanishing-point techniques but instead develops a simpler comparison method. First of all, the warning lines are decided based on conditions such as speed, road width, and car width. The proposed design places the warning lines at half of the width of the remaining space of the road, as shown in Fig. 5. After that, the image feedback is analyzed and the variables are determined as shown in Fig. 4. Based on the guidance law in Fig. 5, the control signals are computed and sent to the vehicle to adjust its direction and keep its trail within the drivable area. Both control-signal and image packets are transmitted bidirectionally via the Internet with the help of the WiFi modules. The control center and the vehicle exchange information in a client-server mode over the wireless network, as shown in Fig. 5.
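A Python sketch of this comparison-based guidance law is given below. The command names and the behavior when a road line is lost are illustrative; the paper specifies only that the distances to the left and right lines are compared against the warning lines.

```python
def guidance_command(d_left, d_right, warn_margin):
    """Comparison-based guidance law (a sketch of the idea in Fig. 5).

    d_left/d_right are the pixel distances from the vehicle center to the
    left and right road lines; warn_margin is the warning-line offset
    (half of the remaining road width in the proposed design). The command
    names are illustrative, not the paper's actual signal encoding."""
    if d_left is None or d_right is None:
        return "STOP"              # a line was lost: stop and yield control
    if d_left < warn_margin:       # drifting toward the left line
        return "STEER_RIGHT"
    if d_right < warn_margin:      # drifting toward the right line
        return "STEER_LEFT"
    return "FORWARD"
```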

Fig. 5 The algorithm of the guidance law

As shown in Fig. 5, the control process is activated only if the connection between the control center and the vehicle is successful; otherwise, the control priority remains with the driver. Once the algorithm begins, the vehicle tries to keep its position at the center of the road. In order to respond to errors more quickly, a set of warning lines is designed to keep the vehicle inside the drivable area; the warning lines also compensate for the time delay in transmitting the latest control signals. The abovementioned algorithm includes the guidance law, the server algorithm, and the client algorithm to ensure successful guidance.
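The following sketch composes the helper functions from the previous sketches into a control-center loop of the kind described above. The video stream URL format and the command mapping are assumptions; losing the video stream is treated as a lost connection, i.e., control returns to the driver.

```python
import cv2

def control_loop(stream_url, link, warn_margin=50):
    """Control-center loop: grab a frame from the webcam stream, detect the
    road lines, apply the guidance law, and send the resulting command to
    the vehicle. Reuses crop_roi, detect_road_lines, lateral_distances,
    guidance_command, and the VehicleLink/CMD_* sketches defined above."""
    to_cmd = {"FORWARD": CMD_FORWARD, "STEER_LEFT": CMD_LEFT,
              "STEER_RIGHT": CMD_RIGHT, "STOP": CMD_STOP}
    cap = cv2.VideoCapture(stream_url)   # e.g., a webcam stream URL (assumed)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:                   # video link lost: stop and yield control
                link.send(CMD_STOP)
                break
            roi = crop_roi(frame)
            left, right = detect_road_lines(roi)
            d_left, d_right = lateral_distances(left, right, roi.shape[1])
            link.send(to_cmd[guidance_command(d_left, d_right, warn_margin)])
    finally:
        cap.release()
```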

Figure 6 shows the feedback control diagram of the vehicle, in which the reference signal Δ depends on conditions such as a car in the next lane or the sudden appearance of objects. The reference Δ is zero in the proposed system, guiding the vehicle to move along the central line of the road. To deal with the transmission time delay, the proposed system defines a set of warning lines inside the real road lines, as shown in Fig. 7, to prevent the vehicle from crossing the lines when a deviation occurs and the control signals arrive with a slight delay. The algorithm in Fig. 5 also incorporates the concept of the warning lines.
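Reading the diagram of Fig. 6 as a proportional controller gives the following sketch. The gain and the sign convention are assumptions; the paper does not state a numeric control law.

```python
def yaw_correction(d_left, d_right, delta=0.0, k_p=0.5):
    """Proportional reading of the feedback loop in Fig. 6.

    The lateral error is the difference between the left and right distances
    minus the reference delta (zero in the paper, i.e., follow the road
    center). The gain k_p and the sign convention (positive output = steer
    right) are assumptions."""
    error = (d_left - d_right) - delta
    return -k_p * error
```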

Fig. 6 The feedback control of the vehicle

Fig. 7 The concept of warning lines

Note that the warning lines do not appear in the real image; they are defined only in the program for guidance control of the vehicle.

4 Experimental results

The experimental setup comprises a notebook computer serving as the control center, an access point serving as the base station for infrastructure mode, a vehicle, and a paper-made road. Figure 8 shows the physical setup of the proposed control system.

Fig. 8 The experimental setup of the proposed system

Two experiments are conducted based on the proposed visual-based vehicle guidance control system. They are described as follows.

4.1 Straight lane case

Figure 9 shows the experimental result of the straight lane case. The line detection result is displayed in red on the upper right part of Fig. 9a, and the blue line indicates the center of the vehicle and its moving direction. The guidance control law leads the vehicle along the road and keeps it inside the road lines. Figure 9b gives the trajectory of the vehicle during [0, 20] s. The road lines are defined as [−100, 100], the center as 0, and the warning lines as [−50, 50] to show the effectiveness of the proposed guidance law. Because the vehicle (a self-propelled BB car) has a hardware deviation that cannot be calibrated out, it always slides slightly to the left; therefore, the trajectories in both Figs. 9b and 10b show some oscillation, but the proposed control scheme remains effective. On the other hand, the response time from visual feedback to the vehicle's reaction is about 0.9 s (averaged over 100 trials), with about 14 image frames fed back per second.
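For reference, the normalization used for the trajectory plots can be expressed as below; the mapping from pixel distances to the [−100, 100] scale is an assumed reconstruction consistent with the description (road lines at ±100, center at 0, warning lines at ±50).

```python
def normalized_offset(d_left, d_right):
    """Map the lateral position to the [-100, 100] scale of Figs. 9b and 10b:
    road lines at -100/+100, center at 0 (warning lines then sit at +/-50).
    The conversion from pixel distances is an assumed reconstruction."""
    width = d_left + d_right
    if width == 0:
        return 0.0
    return 100.0 * (d_left - d_right) / width
```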

Fig. 9 Experimental result of straight lane guidance. a Photograph of the experimental result. b The trajectory of the vehicle with road lines

Fig. 10 Experimental result of curved lane guidance. a Photograph of the experimental result. b The trajectory of the vehicle with road lines

4.2 Curved lane case

As with the straight lane guidance, the experimental result of the curved lane guidance is shown in Fig. 10. The lower right part of Fig. 10a shows the webcam image after line detection: the road lines are marked in red and the center of the vehicle in blue. The speed of the vehicle is somewhat lower in the curved lane case out of safety concern. In Fig. 10b, the road lines are again defined as [−100, 100], the center as 0, and the warning lines as [−50, 50] to show the effectiveness of the proposed guidance law. In the curved road case, the trajectory of the vehicle oscillates more than in the straight road case.

5 Conclusion

A visual-based vehicle guidance control system has been developed in this paper and validated with experiments. First, the image information of the road is acquired by the webcam mounted on the self-propelled vehicle and sent to the control center via the wireless network. Then, the road lines are detected through a series of image processing techniques. Moreover, a set of warning lines is designed to prevent the vehicle from crossing the road lines. Based on this information from the webcam, a simple guidance control law is provided to keep the vehicle inside the drivable area. Two experimental results are given to show the effectiveness of the proposed visual-based vehicle guidance control system, with satisfactory performance.

References

  1. A Broggi, M Bertozzi, A Fascioli, C Guarino Lo Bianco, A Piazzi, Visual perception of obstacles and vehicles for platooning. IEEE Trans. Intell. Transp. Syst. 1(3), 164–176 (2000)


  2. AG Mohapatra, Computer vision based smart lane departure warning system for vehicle dynamics control. Sens. Transducers. J. 132(9), 122–135 (2011)


  3. B Yu, W Zhang, Y Cai, A lane departure warning system based on machine vision, in Proc. 2008 IEEE Pacific-Asia Workshop on Computational Intelligence and Industrial Application. 1, 197–201 (2008). doi:10.1109/PACIIA.2008.142

  4. OO Khalifa, R Islam, A Am Assidi, A-H Abdullah, S Khan, Vision based road lane detection system for vehicles guidance. Aust. J. Basic Appl. Sci. 5(5), 728–738 (2011)


  5. W Li, Human-like driving for autonomous vehicles using vision-based road curvature modeling. Int. J. Hybrid. Inform. Technol. 6(5), 103–116 (2013). doi:10.14257/ijhit.2013.6.5.10


  6. C Kreucher, S Lakshmanan, K Kluge, A driver warning system based on the LOIS lane detection algorithm. Proc. IEEE Int. Conference on Intelligent Vehicles, 1998, pp. 17–22


  7. ED Dickmanns, N Müller, Scene recognition and navigation capabilities for lane changes and turns in vision-based vehicle guidance. Control Eng. Pract. 4(5), 589–599 (1996). doi:10.1016/0967-0661(96)00041-X


  8. KR Llewellyn, Visual guidance of locomotion. J. Exp. Psychol. 91(2), 245–261 (1971). doi:10.1037/h0031788


  9. ED Dickmanns, T Christians, Relative 3D-state estimation for autonomous visual guidance of road vehicle. Robot. Auton. Syst. 7(2–3), 113–123 (1991). doi:10.1016/0921-8890(91)90036-K


  10. Z Hu, K Uchimura, Real-time data fusion on tracking camera pose for direct visual guidance. 2004 IEEE Intelligent Vehicle Symposium, 2004, pp. 842–847. doi:10.1109/VS.2004.1336494


  11. H-T Lee, W-C Lin, C-H Huang, Wireless indoor surveillance robot with a self-propelled patrolling vehicle. J. Robotics. 2011, 9 (2011). doi:10.1155/2011/197105


  12. R Jošth, M Dubská, A Herout, J Havel, Real-time line detection using accelerated high-resolution Hough transform, Proc. 17th Scandinavian Conference on Image Analysis 6688, 784–793 (2011). doi:10.1007/978-3-642-21227-7_73


  13. A Broggi, M Bertozzi, A Fascioli, C Guarino Lo Bianco, A Piazzi, The ARGO autonomous vehicle's vision and control systems. Int. J. Intelligent Control Syst. 3(4), 409–441 (1999)


  14. H Cheng, N Zheng, C Sun, H van de Wetering, Vanishing point and Gabor feature based multi-resolution on-road vehicle detection. Proc. Third Int. Conference on Advances in Neural Networks 3973, 46–51 (2006). doi:10.1007/11760191_7



Acknowledgements

The author would like to thank Jhe-Yu Guan, Wei-Liang Chen, Shao-Hsuan Hsu, Hao-Hsiang Yang, and Li-Wei Liu for their help in the experimental setup and implementation.

Author information


Corresponding author

Correspondence to Hou-Tsan Lee.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Lee, HT. Guidance control of vehicles based on visual feedback via internet. J Wireless Com Network 2015, 155 (2015). https://doi.org/10.1186/s13638-015-0391-5
