I am doing automatic calibration of a robot and a vision system: the camera is stationary and detects an object for the robot to pick up. My procedure is as follows:
- Step 1: Calibration (hand-eye)
I attach a mark (e.g. a plus sign or an arrow) to the robot tool. I move the robot through a 3x3 grid in the XY plane (9 points) and additionally rotate the tool at 2 or more of the points. At each point I send the robot's actual TCP coordinates (X, Y, T; X/Y in mm, T in degrees), and vision detects the mark at that point (row, col, angle; angle in degrees).
=> I end up with List<VsPoint> visionPoses and List<VsPoint> robotPoses (the container types are sketched below).
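For reference, the two container types are plain data holders along these lines (a minimal sketch showing only the fields used in the snippets below):

// Minimal sketch of the data containers used below.
public class VsPoint
{
    public double X { get; set; }  // vision: row (px); robot: X (mm)
    public double Y { get; set; }  // vision: col (px); robot: Y (mm)
    public double T { get; set; }  // angle in degrees
}

public class VsCalibrationData
{
    public HTuple ImageToWorld { get; set; }  // 2D affine map (row, col) -> (x, y) in mm
    public HTuple WorldToImage { get; set; }  // inverse map
}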
if (robotPoses.Count < 9 || visionPoses.Count < 9)
    throw new Exception("Need at least 9 calibration points");

// Step 1: Compute the affine matrix from vision to robot.
// Build point correspondences: mark position in the image (row, col)
// vs. robot TCP position (x, y) in mm.
HTuple rows = new HTuple();
HTuple cols = new HTuple();
HTuple x_mm = new HTuple();
HTuple y_mm = new HTuple();
for (int i = 0; i < 9; i++)  // only the 3x3 translation grid is used for the affine fit
{
    rows.Append(visionPoses[i].X);  // row (px)
    cols.Append(visionPoses[i].Y);  // col (px)
    x_mm.Append(robotPoses[i].X);   // mm
    y_mm.Append(robotPoses[i].Y);   // mm
}

// Least-squares 2D affine transform from image (row, col) to robot world (x, y)
HOperatorSet.VectorToHomMat2d(rows, cols, x_mm, y_mm, out HTuple homMatImageToWorld);
HOperatorSet.HomMat2dInvert(homMatImageToWorld, out HTuple homMatWorldToImage);

calibrationData = new VsCalibrationData
{
    ImageToWorld = homMatImageToWorld,
    WorldToImage = homMatWorldToImage,
};
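To make sure the affine fit itself is good, the calibration points can be re-projected and the residuals inspected (a quick sanity-check sketch using the tuples above; AffineTransPoint2d accepts whole tuples, so all 9 points map in one call):

// Sanity check (sketch): re-project the calibration points through the
// fitted matrix and report the worst residual in mm.
HOperatorSet.AffineTransPoint2d(homMatImageToWorld, rows, cols,
    out HTuple xFit, out HTuple yFit);
double[] xf = xFit.DArr, yf = yFit.DArr, xRef = x_mm.DArr, yRef = y_mm.DArr;
double maxErr = 0.0;
for (int i = 0; i < 9; i++)
{
    double ex = xf[i] - xRef[i];
    double ey = yf[i] - yRef[i];
    maxErr = Math.Max(maxErr, Math.Sqrt(ex * ex + ey * ey));
}
Console.WriteLine($"Max calibration residual: {maxErr:F3} mm");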
- Step 2: Get Master
I place a sample object, let vision detect it to get the master pose (row, col, angle), and record the robot TCP coordinates at the corresponding pick position (master pose, X/Y/T).
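In code this step just stores the two reference poses (a sketch; DetectMark and ReadTcpFromController are hypothetical stand-ins for my detection routine and robot I/O):

// Master teaching (sketch): both poses are captured once and kept for later triggers.
VsPoint refVision = DetectMark(masterImage);   // hypothetical: returns (row, col, angle in deg)
VsPoint refRobot = ReadTcpFromController();    // hypothetical: returns TCP (X mm, Y mm, T deg)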
- Step 3: Trigger
Now, when a product similar to the sample from Step 2 is placed anywhere in the camera's field of view, I want to return robot offset coordinates relative to the master pose, so that the robot can apply the offset and pick up the product.
public static VsPoint ComputeOffsetFromMaster1(VsCalibrationData calib, VsPoint currentVisionPose, VsPoint refVision, VsPoint refRobot)
{
    // Step 1: Convert the master & current vision poses to world coordinates (mm)
    HOperatorSet.AffineTransPoint2d(calib.ImageToWorld, refVision.X, refVision.Y,
        out HTuple xm, out HTuple ym);
    HOperatorSet.AffineTransPoint2d(calib.ImageToWorld, currentVisionPose.X, currentVisionPose.Y,
        out HTuple xc, out HTuple yc);

    // Net mark displacement and rotation between the current and master poses
    double dx = xc.D - xm.D;
    double dy = yc.D - ym.D;
    double dAngleDeg = currentVisionPose.T - refVision.T;
    double dAngleRad = dAngleDeg * Math.PI / 180.0;

    // Step 2: Apply rotation + translation to the master robot TCP pose,
    // rotating around the master mark position (xm, ym)
    double x0 = refRobot.X;
    double y0 = refRobot.Y;

    // 1. Express the robot pose relative to the master mark
    double xt = x0 - xm.D;
    double yt = y0 - ym.D;

    // 2. Rotate around the master mark
    double xr = Math.Cos(dAngleRad) * xt - Math.Sin(dAngleRad) * yt;
    double yr = Math.Sin(dAngleRad) * xt + Math.Cos(dAngleRad) * yt;

    // 3. Translate to the new mark position
    double xNew = xr + xc.D;
    double yNew = yr + yc.D;

    // 4. New angle
    double tNew = refRobot.T + dAngleDeg;

    // Step 3: Return the delta offset between the original robot TCP and the new pose
    return new VsPoint
    {
        X = xNew - x0,
        Y = yNew - y0,
        T = dAngleDeg
    };
}
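Inside the same method, the rotate-about-a-point step can equivalently be written with HALCON's 2D matrix operators, which I use as a cross-check (a sketch using the local variables above; it should reproduce xNew/yNew exactly):

// Cross-check (sketch): rotate by dAngleRad around the master mark (xm, ym),
// then translate by the mark displacement (dx, dy). HomMat2dTranslate
// left-multiplies, i.e. the translation is applied after the rotation.
HOperatorSet.HomMat2dIdentity(out HTuple homMat2D);
HOperatorSet.HomMat2dRotate(homMat2D, dAngleRad, xm, ym, out HTuple homMatRot);
HOperatorSet.HomMat2dTranslate(homMatRot, dx, dy, out HTuple homMatMove);
HOperatorSet.AffineTransPoint2d(homMatMove, refRobot.X, refRobot.Y,
    out HTuple xCheck, out HTuple yCheck);  // expect xCheck.D == xNew, yCheck.D == yNew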
When T ≈ 0°, this returns the correct XY. But when T ≠ 0 it does not: as far as I can tell, the XY coming out of AffineTransPoint2d corresponds to the pose at T = 0, so the result is inaccurate. Is there a solution that returns the exact TCP offset and stays accurate even when a rotation angle is involved?