Robotics

Lecture 4.1
Manipulation Exercises
1. https://www.ini.rub.de/upload/file/1655995499_af8af7943b72ba7be573/ex6_DoF.pdf
2. https://motion.cs.illinois.edu/RoboticSystems/Kinematics.html
3. https://opentextbooks.clemson.edu/wangrobotics/chapter/differential-kinematics/
Navigation Exercises
1. https://opendsa-server.cs.vt.edu/embed/DijkstraPE
2. https://dtai.cs.kuleuven.be/education/ai/Exercises/Session2/Solutions/solution.pdf
3. https://www.youtube.com/watch?app=desktop&v=dYeUNdMtBdY
4. https://www.youtube.com/watch?v=cMXApzVcOeY
Perception Exercises
1. https://cvexplained.wordpress.com/2020/04/30/kernels/
2. https://medium.com/@timothy_terati/image-convolution-filtering-a54dce7c786b
Integration Exercises
1. https://www.youtube.com/watch?v=L9KTzZO3C8s
Case Study: Mobile Manipulation and Visual Servoing
● We’re going to take a complex robotic solution and break it down into some of the smaller problems we’d need to solve
● We’ll also use these smaller problems as examples of what you should be able to do with the theory you’ve learned
Our System
● We have an omni-wheel base.
● The base has a 2D lidar with a 360-degree field of view.
● Mounted on the base is a 5-DOF robotic arm.
● On the end of the arm is a camera.
● We want to navigate through a building, then press a button on the wall.
What we start with
● We have a map of the environment.
● We have the button’s location on the map and its height off the floor.
● We know the button is square and we know its edge length.
● Our base already has a controller that never rotates the base: we give it an x-y direction and speed, and it drives the robot in that direction.
● Our arm already has a controller, so we can send it joint velocities.
High Level Plan
● Work out a good base position from which to press the button
● Navigate to that base position
● Move the arm to look at the button
● Detect the button and where its center is relative to the arm
● Move the arm in the direction of the button
Subproblems
● Localization
○ Process model
○ Measurement model
○ Implement particle filter
● Global planning
○ Which algorithm to use?
○ Implement algorithm
● Local planning
○ Which algorithm to use?
○ Plan from current configuration to target arm configuration
● Determine best arm configuration to look at button
○ Determine camera pose relative to button
○ Inverse kinematics to get joint angles
● Navigation target
○ Forward kinematics to get base position
○ Transform base relative to map frame
● Looking at button
○ Configuration values
● Detecting button
○ Canny edge detection
○ Tuning parameters
○ Detect square
Subproblems
● Visual servo
○ Get button center position in Cartesian coordinates
○ Get desired velocity of end effector
○ Compute Jacobian
○ Use pseudoinverse to get joint velocities
● Integration
○ Formulate behavior tree
○ Build in recovery modes
We’ll focus on
1. Process model and measurement model
2. Choosing and implementing global planner
3. Forward and inverse kinematics to get navigation goal and look configuration
4. Jacobian to perform velocity control
5. Implementing Canny edge detection
6. Implementing RANSAC to detect square
Process Model and Measurement Model
● Process model: We’re looking for some distribution from which we can sample x_t based on x_{t-1} and u_t
○ x_t = [X_t, Y_t]; we omit orientation since we never turn
○ Let us say that we know the controller has some error, ε, which happens to be normally distributed with zero mean and 0.01 standard deviation.
○ Let us say that our particle filter will run at 100 Hz, so Δt = 0.01 s
○ u_t = [v_{x,t}, v_{y,t}]
○ x_t(x_{t-1}, u_t) = [X_{t-1} + 0.01(v_{x,t} + ε), Y_{t-1} + 0.01(v_{y,t} + ε)]
● Measurement model: We’re looking for some likelihood function from which we can get the likelihood of some measurement z_t given x_t
○ We won’t go into the maths of this, but let’s say we find the formula for p(z_t | x_t) = f(x_t, z_t)
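As a concrete illustration, here is a minimal Python sketch of sampling this process model inside a particle filter step, assuming numpy. The array shapes and function name are illustrative choices, and the measurement likelihood f(x_t, z_t) is left out, since the slide does not derive it.

```python
import numpy as np

DT = 0.01      # 100 Hz update rate, so dt = 0.01 s
SIGMA = 0.01   # controller noise std dev (zero mean, from the slide)

def sample_process_model(particles, u, rng):
    """Propagate particles: x_t = x_{t-1} + dt * (u_t + eps).

    particles: (N, 2) array of [X, Y] states
    u:         commanded velocity [v_x, v_y]
    """
    eps = rng.normal(0.0, SIGMA, size=particles.shape)
    return particles + DT * (u + eps)

# Usage: propagate 1000 particles one step with a 0.5 m/s x-command.
rng = np.random.default_rng(0)
particles = np.zeros((1000, 2))
particles = sample_process_model(particles, np.array([0.5, 0.0]), rng)
```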
Choosing and implementing global planner
● Dijkstra’s pros/cons
○ Pros: guarantees an optimal path, and finds paths to many nodes at once, so results are reusable
○ Cons: slower, as it searches the whole space
● A* pros/cons
○ Pros: faster than Dijkstra’s
○ Cons: only finds a path from start to goal
● Let’s say that we only intend to drive to this button and press it
● We also have quite a large map
● For these reasons, we’ll choose A* for its computational efficiency
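A minimal sketch of what the chosen A* planner could look like on a 2D occupancy grid, assuming 4-connected motion, unit step costs, and a Manhattan-distance heuristic; the grid representation and function signature are assumptions, not part of the original material.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = occupied).

    start, goal: (row, col) tuples. Returns a list of cells or None.
    """
    def h(a, b):  # Manhattan distance, admissible for 4-connected moves
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(h(start, goal), 0, start)]   # entries are (f, g, cell)
    came_from = {}
    best_g = {start: 0}
    while open_set:
        f, g, cell = heapq.heappop(open_set)
        if cell == goal:                      # reconstruct path backwards
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = (cell[0] + dr, cell[1] + dc)
            if (0 <= nbr[0] < len(grid) and 0 <= nbr[1] < len(grid[0])
                    and grid[nbr[0]][nbr[1]] == 0):
                ng = g + 1
                if ng < best_g.get(nbr, float("inf")):
                    best_g[nbr] = ng
                    came_from[nbr] = cell
                    heapq.heappush(open_set, (ng + h(nbr, goal), ng, nbr))
    return None   # no path found
```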
Forward/Inverse kinematics to get nav goal and look configuration
1. We start by fixing coordinate frames to links of the kinematic chain
2. We then compute the transform between each pair of adjacent frames on the chain, ^iT_{i+1}(q_i)
3. We then compute the transform between the base frame and end effector:
^0T_5(q) = ^0T_1 · ^1T_2 · ^2T_3 · ^3T_4 · ^4T_5
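A sketch of composing this chain in code, assuming numpy and a hypothetical arm of five revolute joints, each rotating about its local z axis and followed by a made-up fixed link offset; the real arm’s geometry would replace link_transform.

```python
import numpy as np

def rot_z(theta):
    """Homogeneous rotation about the local z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def trans(x, y, z):
    """Homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Hypothetical link lengths (made up for illustration).
LINKS = [0.10, 0.15, 0.15, 0.10, 0.05]

def link_transform(i, q_i):
    """^iT_{i+1}(q_i): joint rotation followed by a fixed link offset."""
    return rot_z(q_i) @ trans(LINKS[i], 0.0, 0.0)

def forward_kinematics(q):
    """Compose ^0T_5(q) = ^0T_1 · ^1T_2 · ^2T_3 · ^3T_4 · ^4T_5."""
    T = np.eye(4)
    for i, q_i in enumerate(q):
        T = T @ link_transform(i, q_i)
    return T

T05 = forward_kinematics(np.zeros(5))
print(T05[:3, 3])   # end-effector position in the base frame
```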
Forward/Inverse kinematics to get nav goal and look configuration
1. Calculate the Jacobian J(q) by taking partial derivatives of the forward kinematics
a. The Jacobian is the matrix that takes joint velocities and transforms them to positional and rotational velocities [v_x, v_y, v_z, ω_x, ω_y, ω_z]
b. We can split this into J_p and J_r
2. J_p can be calculated by taking the x, y, z component of ^0T_5(q) and taking the partial derivative with respect to each joint angle
3. J_r can be calculated by taking the rotational velocity created by each joint’s z axis and rotating it to the base frame with the rotational component of ^0T_{i+1}(q)
Positional and Rotational Jacobian
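Continuing the previous sketch, J_p can be approximated numerically by finite-differencing the forward kinematics, and J_r built geometrically from each joint’s z axis expressed in the base frame. Both functions assume the hypothetical forward_kinematics/link_transform chain from the sketch above.

```python
import numpy as np

def positional_jacobian(fk, q, eps=1e-6):
    """Numerical J_p: partial derivatives of the end-effector position
    (the x, y, z component of ^0T_5(q)) with respect to each joint."""
    p0 = fk(q)[:3, 3]
    J = np.zeros((3, len(q)))
    for i in range(len(q)):
        dq = q.copy()
        dq[i] += eps
        J[:, i] = (fk(dq)[:3, 3] - p0) / eps
    return J

def rotational_jacobian(q):
    """Geometric J_r for revolute joints: column i is joint i's rotation
    axis (its local z) rotated into the base frame. Uses this sketch's
    convention that joint i rotates about frame i's z axis."""
    Jr = np.zeros((3, len(q)))
    T = np.eye(4)
    for i, q_i in enumerate(q):
        Jr[:, i] = T[:3, 2]              # z axis of frame i in base frame
        T = T @ link_transform(i, q_i)   # advance to the next frame
    return Jr

q = np.zeros(5)
Jp = positional_jacobian(forward_kinematics, q)
Jr = rotational_jacobian(q)
```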
Jacobian to perform velocity control
● Assuming our perception system can give us a position of the button in camera frame, ^CP_target
● We can transform it to end-effector frame: ^5P_target = ^5T_C · ^CP_target
● We can then use our forward kinematics to get it to base frame: ^0P_target = ^0T_5 · ^5P_target
● We know the position of our end effector in base frame from our forward kinematics: ^0P_EE
● We can calculate the vector from our EE to the target: ^0V_{EE→target} = ^0P_target − ^0P_EE
● We can decide some speed we want the gripper to move in that direction: s_des
● Now we can convert this into a velocity: v_des = s_des · ^0V_{EE→target} / ||^0V_{EE→target}||
● And to move the gripper with that velocity we use the Jacobian pseudoinverse: dq/dt = J(q)⁺ v_des
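Putting the pieces together, a minimal sketch of this velocity-control step, using only the positional 3×N Jacobian since we command a linear velocity; the button position and the default speed are made-up values.

```python
import numpy as np

def button_press_velocity(J, p_target_0, p_ee_0, s_des=0.05):
    """Joint velocities that move the end effector toward the target
    at speed s_des, via the pseudoinverse: dq/dt = pinv(J) @ v_des."""
    v_dir = p_target_0 - p_ee_0                    # ^0V_{EE->target}
    v_des = s_des * v_dir / np.linalg.norm(v_dir)  # desired Cartesian velocity
    return np.linalg.pinv(J) @ v_des

# Usage, reusing the FK and Jacobian from the previous sketches.
q = np.zeros(5)
J = positional_jacobian(forward_kinematics, q)
p_ee = forward_kinematics(q)[:3, 3]
p_target = np.array([0.4, 0.1, 0.2])   # made-up button position, base frame
dq = button_press_velocity(J, p_target, p_ee)
```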
Implementing Canny edge detection
● Steps 1 and 2: we filter the image with the derivative of the Gaussian to get the magnitude of the gradient of the smoothed image
● Step 3: we threshold the image; since there is a lot of contrast between edges and non-edges, we can set a high threshold
● Step 4: we perform non-maximum suppression to get thinner lines
○ To do this we need to filter by the x-derivative and y-derivative of the Gaussian to get the directional derivative of the image
○ Then, for each pixel, we make virtual pixels on either side of it in the direction of that pixel’s derivative
○ We interpolate to get the magnitudes of these virtual pixels; if our candidate pixel’s magnitude is higher than both virtual ones, we keep it
● Step 5: we perform hysteresis to remove small line segments
○ We categorize pixels as weak or strong
○ If a weak pixel is not connected to a strong pixel through its neighbours, we discard it
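A simplified sketch of these steps in Python, assuming scipy. For brevity it quantizes gradient directions into four bins during non-maximum suppression rather than interpolating virtual pixels, and the sigma and threshold values are made-up tuning parameters.

```python
import numpy as np
from scipy import ndimage

def canny(img, sigma=1.4, low=0.05, high=0.15):
    # Steps 1-2: derivative-of-Gaussian filtering, gradient magnitude.
    gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))
    gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12

    # Step 4: non-maximum suppression along the (quantized) gradient
    # direction; keep a pixel only if it is a local maximum.
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    nms = np.zeros_like(mag)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                nms[i, j] = mag[i, j]

    # Steps 3 and 5: double threshold, then hysteresis -- keep weak
    # pixels only if their connected component touches a strong pixel.
    strong, weak = nms > high, nms > low
    labels, _ = ndimage.label(weak, structure=np.ones((3, 3)))
    keep = np.unique(labels[strong])
    return np.isin(labels, keep[keep > 0])
```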
Implementing RANSAC to detect the button
● We will perform two RANSAC operations:
a. Using the laser scan to determine where the wall is
b. Using the edges detected from the image to determine where the button is
● We know
a. The button height
b. The button’s pitch and roll relative to the camera
● We want to know
a. The button’s yaw relative to the camera
b. The button’s center position relative to the camera
● Our models are
a. A straight line with parameters m, c (for the wall)
b. An isosceles trapezoid with parameters l_L, l_R, w (for the button)
RANSAC
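A minimal sketch of RANSAC for model (a), fitting the wall line to 2D laser points; the iteration count and inlier tolerance are made-up tuning parameters, and the trapezoid model (b) would reuse the same hypothesize-and-verify loop with a different minimal sample and inlier test.

```python
import numpy as np

def ransac_line(points, iters=200, tol=0.02, rng=None):
    """Fit y = m*x + c to (N, 2) points with RANSAC.

    Returns ((m, c), inlier_mask) for the model with the most inliers.
    """
    rng = rng or np.random.default_rng()
    best_inliers, best_model = None, None
    for _ in range(iters):
        # Minimal sample: two distinct points define a candidate line.
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if abs(x2 - x1) < 1e-9:
            continue                      # skip (near-)vertical samples
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        # Inliers: points within tol perpendicular distance of the line.
        d = np.abs(m * points[:, 0] - points[:, 1] + c) / np.hypot(m, 1.0)
        inliers = d < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (m, c)
    return best_model, best_inliers
```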
