Visual Approach
Sriram Vishwanath
WNCG
June 2017
Data-Supported Transportation Operations & Planning Center
(D-STOP)
A Tier 1 USDOT University Transportation Center at The University of Texas at Austin
DISCLAIMER
The contents of this report reflect the views of the authors, who are responsible for the facts and the
accuracy of the information presented herein. This document is disseminated under the sponsorship of
the U.S. Department of Transportation’s University Transportation Centers Program, in the interest of
information exchange. The U.S. Government assumes no liability for the contents or use thereof.
Technical Report Documentation Page
1. Report No.: D-STOP/2017/105
4. Title and Subtitle: Self-Parking for Semi-Autonomous Vehicles
5. Report Date: June 2017
16. Abstract
This project focuses on combining computer vision with localization-based navigation schemes to aid the efficient and safe parking of vehicles in high-density parking areas. The principles of collision avoidance, simultaneous localization and mapping, and vision-based actuation in robotics are used to enable this functionality.
Acknowledgements
The authors recognize that support for this research was provided by a grant from the
U.S. Department of Transportation, University Transportation Centers.
Self-Parking for Semi-Autonomous
Vehicles
Objective
The objective of the self-parking project was to provide a testbed on which
individual self-parking techniques and algorithms for coordination could be
developed and tested. The overall project consisted of three main stages: the Visual Approach, the Range Approach, and GPS/Compass navigation.
The Pharos testbed is composed of robots called Proteus III. Proteus III is a robot
designed for research that brings vehicular mobility/control and communications
together. What sets Proteus apart from most other robotic research platforms is
its modular architecture and development-friendly design. Proteus provides a
powerful base platform while allowing for the essential customization and
expansion of the robot for whatever specific research application it must serve. In
order to maintain simplicity and intuition with such diverse and dynamic
capabilities, great care has been taken in designing the robot.
A key feature of the Pharos testbed is the mobility capabilities of the Proteus III
robots. Mobility is exceedingly hard to model accurately, and thus testbeds are
invaluable in mobile network communications research. Theoretical bounds and
numerical simulations can provide insight into real-world behavior, but each has its own weaknesses. In particular, when communication and vehicular behavior become dependent on one another, it is nearly impossible to model every aspect of this interaction. Thus, a testbed is needed, both as a mechanism to test new techniques and to provide feedback to the design process on the impact of mobility on communication performance metrics. Pharos has
previously been used in several communication network scenarios including
network coding, delay tolerant networks, multi-robot patrol, and autonomous
intersections.
Proteus Control Interface
The Proteus III robots have a computational plane that consists of an x86
Computer (Via EPIA N800-10E) with a WiFi Network interface. The
computational plane runs ROS (Robot Operating System) which provides a
platform for control of each individual robot and coordination between multiple
robots. We briefly highlight the capabilities of the Proteus Control Interface
relevant to this work.
Mobility: The Proteus robots have mobility capabilities via a Traxxas Stampede
chassis controlled by an Arduino micro-controller. The Traxxas chassis gives the
Proteus robot robustness in outdoor environments and the motor provides a
range of speeds for different mobility scenarios.
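To illustrate how such a mobility interface can be kept safe across the chassis's range of speeds, the sketch below clamps a requested drive command to assumed chassis limits before it would be forwarded to the motor controller. This is not the actual Proteus/Arduino firmware; the limit constants and function name are assumptions for illustration.

```python
# Illustrative sketch only: bounds a desired (speed, steering) command
# before it is handed to a motor controller such as the Proteus III's
# Arduino. The limit constants below are assumptions, not the real
# Traxxas Stampede / Proteus values.

MAX_SPEED_MPS = 2.0    # assumed top speed for controlled experiments
MAX_STEER_DEG = 30.0   # assumed steering lock of the chassis

def clamp_command(speed_mps, steer_deg):
    """Clamp a drive command to the chassis limits."""
    speed = max(-MAX_SPEED_MPS, min(MAX_SPEED_MPS, speed_mps))
    steer = max(-MAX_STEER_DEG, min(MAX_STEER_DEG, steer_deg))
    return speed, steer
```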
Visual Approach
The visual approach uses a camera to scan for a parking space. Once a parking space is detected, a path-finding algorithm is used to select an efficient path to navigate into the spot. Finally, the path is executed by publishing the appropriate angle and speed commands while monitoring distance.
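Since the parking space is denoted by a colored perimeter on the floor, the camera-scan step can be sketched as a simple threshold-and-bounding-box detector. The image representation, color threshold, and function names below are assumptions for illustration, not the project's actual vision pipeline.

```python
# Hypothetical sketch of the "scan for a parking space" step: threshold the
# camera image for the marker color and return the bounding box of matching
# pixels. The image here is a plain 2D list of (r, g, b) tuples.

def find_marker_bbox(image, is_marker):
    """Return (row_min, col_min, row_max, col_max) of marker pixels, or None."""
    hits = [(r, c) for r, row in enumerate(image)
                   for c, px in enumerate(row) if is_marker(px)]
    if not hits:
        return None
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return min(rows), min(cols), max(rows), max(cols)

def reddish(px):
    """Example predicate: treat strongly red pixels as the perimeter marker
    (the threshold values are assumptions)."""
    r, g, b = px
    return r > 200 and g < 80 and b < 80
```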
Searching for a spot: The first step using the visual approach is to look for a
parking space. In our case, we used a colored perimeter and a marker on the
floor to denote a parking space. Next, the robot finds a path using the current
location and the target location. We use a variation of the A* algorithm that is
able to find paths in a couple of seconds. Finally, after a path to follow has been
found, the node sends information to the semi-autonomous vehicle to execute
the desired movements.
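The report uses a variation of the A* algorithm for the path-finding step; as a point of reference, the textbook version on a 4-connected occupancy grid can be sketched as follows. This illustrates the idea only and is not the project's modified variant.

```python
import heapq

# Minimal A* search on a 4-connected grid with a Manhattan-distance
# heuristic. grid[r][c] == 0 means the cell is free; 1 means blocked.

def astar(grid, start, goal):
    """Return a list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    came_from = {}                            # finalized cell -> parent
    cost = {start: 0}                         # best g seen per cell
    while frontier:
        _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:
            continue                          # already finalized
        came_from[cur] = parent
        if cur == goal:                       # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < cost.get((nr, nc), float("inf")):
                    cost[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None
```

On small occupancy grids like a parking area discretized at robot scale, this search completes well within the couple-of-seconds budget mentioned above.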
NOTE: Further details of the visual approach can be found in the presentation
attached.
Figure 1 shows the configuration of the Proteus robot when using the Visual
Approach.
Range Approach
The Range Approach uses ultrasound range finders to address situations where more precision is required; in particular, it prevents physical collisions with other objects around the desired spot. The module developed for this approach uses the ultrasound range finder to identify obstacles, edges, and empty spaces that are large enough to park in. Once a space is identified, the robot turns into the space at low speed while monitoring its distance to the obstacles around it. The robot situates itself equidistant from the obstacles on either side and stops 30 cm away from the front obstacle.
The first step in the range approach is to find edges and the shape of the space. In this initial phase, the robot is assumed to be perpendicular to the parking spot, and so it must turn into the spot. Once the robot has entered the spot, it slowly parks itself while avoiding collisions with neighboring obstacles. The robot parks by aligning itself equidistant to both sides, straightening, and stopping 30 cm from the front obstacle.
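The range-approach steps above can be sketched as three small pieces of logic: detecting a wide-enough gap in a sweep of lateral ultrasound readings, a proportional steering correction toward the centerline, and the 30 cm stopping rule. The gap thresholds, reading spacing, and gain are assumptions for illustration, not the report's calibrated values.

```python
# Sketch of the range-approach logic. All constants are assumed values.

GAP_DEPTH_CM = 60.0        # a reading deeper than this means "empty"
READING_SPACING_CM = 5.0   # assumed travel distance between readings
MIN_GAP_CM = 40.0          # assumed minimum width needed to park

def find_gap(readings):
    """Return (start_idx, end_idx) of the first wide-enough gap, or None."""
    start = None
    for i, d in enumerate(readings):
        if d > GAP_DEPTH_CM:
            if start is None:
                start = i
            if (i - start + 1) * READING_SPACING_CM >= MIN_GAP_CM:
                return start, i
        else:
            start = None
    return None

def steer_correction(left_cm, right_cm, gain=0.5):
    """Proportional correction; positive steers toward the wider side,
    driving the robot to sit equidistant from both lateral obstacles."""
    return gain * (left_cm - right_cm)

def should_stop(front_cm):
    """Stop once the front obstacle is 30 cm away (the report's rule)."""
    return front_cm <= 30.0
```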
GPS/Compass Navigation
Both the Visual and Range Approaches described above concentrated on getting the robot into the parking spot. To reach the parking spot in the first place, we equipped the Proteus with GPS and compass sensors and built a navigation module on top of them.
The code for these three ROS nodes can be found in:
• GPS: https://github.com/pesantacruz/utexas-ros-
pkg/tree/experimental/stacks/pharos/proteus3_gps_hydro
• Compass: https://github.com/pesantacruz/utexas-ros-
pkg/tree/experimental/stacks/pharos/proteus3_compass_hydro
• Navigation: https://github.com/pesantacruz/utexas-ros-
pkg/tree/experimental/stacks/pharos/proteus3_navigation
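The core of GPS/compass waypoint navigation can be sketched as computing the great-circle bearing from the current GPS fix to the target and comparing it against the compass heading. This uses the standard initial-bearing formula as an illustration of the idea; it is not the code in the ROS nodes linked above.

```python
import math

# Sketch of GPS/compass waypoint navigation: bearing to the waypoint from
# two GPS fixes, and the signed heading error relative to the compass.

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def heading_error_deg(compass_deg, bearing_deg):
    """Signed turn needed to face the waypoint, in (-180, 180]."""
    err = (bearing_deg - compass_deg + 180.0) % 360.0 - 180.0
    return err if err != -180.0 else 180.0
```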
Pharos Lab
01/16/2015
Project overview
• Large amounts of space are underutilized in parking lots due to the need for navigation space
Visual approach
• The visual approach uses a USB webcam to scan for a parking space
• A path is found using a modification of the A* algorithm
• The path is executed by publishing the appropriate angle and speed commands while monitoring distance