FRC Programming with LimelightLib (WPILib Java & C++)

Usage

This is a single-file library.

Java: Copy the LimelightHelpers.java file from the latest release (https://github.com/LimelightVision/limelightlib-wpijava/releases) into your Java project's "robot" folder.

C++: Copy the LimelightHelpers.h file from the latest release (https://github.com/LimelightVision/limelightlib-wpicpp/releases) into your C++ project's include folder and #include "LimelightHelpers.h" where needed.

You don't need to create any objects for your Limelights; the library is a set of simple static functions, designed to maximize ease of use and reliability.

Tips for Success

  • Start simple! Many successful FRC teams use basic approaches effectively. For example, Team 2056 in 2024 used a standard 90FPS color pipeline instead of neural networks for game piece tracking.
  • Think about what you actually need: Do you need full field localization, or would simple target centering work (e.g., driveSpeed = result.getTx() * 0.03)?
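The second tip above can be sketched as a plain proportional controller: scale tx by a gain to get a steering command. The gain and clamp values below are illustrative assumptions to tune, not library constants; on a robot, the input would come from LimelightHelpers.getTX("").

```java
// Sketch: proportional aiming from tx, with an assumed gain and output clamp.
class AimSketch {
    static final double kP = 0.03;        // illustrative gain, tune on your robot
    static final double kMaxOutput = 0.5; // clamp so the drivetrain never saturates

    // Convert a horizontal offset (degrees) into a steering command.
    static double aimOutput(double txDegrees) {
        double out = txDegrees * kP;
        return Math.max(-kMaxOutput, Math.min(kMaxOutput, out));
    }
}
```

In periodic code, `double steer = AimSketch.aimOutput(LimelightHelpers.getTX(""));` would feed the drivetrain's rotation input.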

Key Concepts

1. Basic Usage

Every method accepts a Limelight name parameter. Pass an empty string ("") to use the default name, "limelight":

// Basic targeting data
double tx = LimelightHelpers.getTX(""); // Horizontal offset from crosshair to target in degrees
double ty = LimelightHelpers.getTY(""); // Vertical offset from crosshair to target in degrees
double ta = LimelightHelpers.getTA(""); // Target area (0% to 100% of image)
boolean hasTarget = LimelightHelpers.getTV(""); // Do you have a valid target?

double txnc = LimelightHelpers.getTXNC(""); // Horizontal offset from principal pixel/point to target in degrees
double tync = LimelightHelpers.getTYNC(""); // Vertical offset from principal pixel/point to target in degrees

2. Pipeline Management

Pipelines are like instantly-swappable programs that change how Limelight processes images. You can set up 10 different pipelines in the web interface:

// Switch to pipeline 0
LimelightHelpers.setPipelineIndex("", 0);

3. LED Control

For Limelights with bright illumination LEDs, you can control the LEDs for different situations:

// Let the current pipeline control the LEDs
LimelightHelpers.setLEDMode_PipelineControl("");

// Force LEDs on/off/blink
LimelightHelpers.setLEDMode_ForceOn("");
LimelightHelpers.setLEDMode_ForceOff("");
LimelightHelpers.setLEDMode_ForceBlink("");

4. Field Localization With MegaTag

(See the MegaTag documentation and the Swerve Pose Estimation Tutorial for more details.)

// In your periodic function:
LimelightHelpers.PoseEstimate limelightMeasurement = LimelightHelpers.getBotPoseEstimate_wpiBlue("limelight");
if (limelightMeasurement.tagCount >= 2) { // Only trust measurement if we see multiple tags
  m_poseEstimator.setVisionMeasurementStdDevs(VecBuilder.fill(0.7, 0.7, 9999999));
  m_poseEstimator.addVisionMeasurement(
      limelightMeasurement.pose,
      limelightMeasurement.timestampSeconds);
}
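A common extension of the multi-tag gate above is to also accept single-tag results when the solve is unambiguous and the tag is close. The thresholds below are illustrative assumptions to tune; `tagCount`, `avgTagDist`, and `ambiguity` mirror fields available on `PoseEstimate` and `RawFiducial`.

```java
// Sketch: decide whether a MegaTag1 estimate is trustworthy enough to fuse.
// Thresholds are assumptions to tune, not library-provided values.
class MegaTag1Filter {
    static final double kMaxAmbiguity = 0.7;     // reject ambiguous single-tag solves
    static final double kMaxSingleTagDist = 3.0; // meters; far single tags are noisy

    static boolean shouldAccept(int tagCount, double ambiguity, double avgTagDist) {
        if (tagCount >= 2) {
            return true; // multi-tag solves are well constrained
        }
        if (tagCount == 1) {
            return ambiguity < kMaxAmbiguity && avgTagDist < kMaxSingleTagDist;
        }
        return false; // no tags, nothing to fuse
    }
}
```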

5. Field Localization With MegaTag2

(See the MegaTag2 documentation and the Swerve Pose Estimation Tutorial for more details.)

MegaTag2 enhances localization accuracy by fusing robot orientation data with vision. By providing gyro data, you help MegaTag2 constrain the localization problem and provide excellent results even if only a single tag is visible:

// First, tell Limelight your robot's current orientation
double robotYaw = m_gyro.getYaw();
LimelightHelpers.SetRobotOrientation("", robotYaw, 0.0, 0.0, 0.0, 0.0, 0.0);

// Get the pose estimate
LimelightHelpers.PoseEstimate limelightMeasurement = LimelightHelpers.getBotPoseEstimate_wpiBlue_MegaTag2("");

// Add it to your pose estimator
m_poseEstimator.setVisionMeasurementStdDevs(VecBuilder.fill(0.5, 0.5, 9999999));
m_poseEstimator.addVisionMeasurement(
    limelightMeasurement.pose,
    limelightMeasurement.timestampSeconds);
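Because MegaTag2 leans on the yaw you supply, a common guard (an assumption here, not part of the library) is to skip vision updates while the robot is spinning quickly, since a fast-changing yaw makes the fused estimate momentarily stale:

```java
// Sketch: reject MegaTag2 updates during fast rotation.
// The rate threshold is an illustrative assumption to tune.
class MegaTag2Gate {
    static final double kMaxYawRateDegPerSec = 720.0;

    static boolean shouldAccept(int tagCount, double yawRateDegPerSec) {
        if (tagCount == 0) {
            return false; // no tags seen this frame
        }
        return Math.abs(yawRateDegPerSec) <= kMaxYawRateDegPerSec;
    }
}
```

In periodic code, the yaw rate would come from your gyro and the tag count from `limelightMeasurement.tagCount`; only call addVisionMeasurement when the gate passes.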

6. Special AprilTag Functionality

// Set a custom crop window for improved performance (-1 to 1 for each value)
LimelightHelpers.setCropWindow("", -0.5, 0.5, -0.5, 0.5);

// Change the camera pose relative to robot center (x forward, y left, z up, degrees)
LimelightHelpers.setCameraPose_RobotSpace("",
    0.5,  // Forward offset (meters)
    0.0,  // Side offset (meters)
    0.5,  // Height offset (meters)
    0.0,  // Roll (degrees)
    30.0, // Pitch (degrees)
    0.0   // Yaw (degrees)
);

// Set AprilTag offset tracking point (meters)
LimelightHelpers.setFiducial3DOffset("",
    0.0, // Forward offset
    0.0, // Side offset
    0.5  // Height offset
);

// Configure AprilTag detection
LimelightHelpers.SetFiducialIDFiltersOverride("", new int[]{1, 2, 3, 4}); // Only track these tag IDs
LimelightHelpers.SetFiducialDownscalingOverride("", 2.0f); // Process at half resolution

// Adjust keystone crop window (-0.95 to 0.95 for both horizontal and vertical)
LimelightHelpers.setKeystone("", 0.1, -0.05);

7. Python Integration

Communicate with custom Python SnapScripts:

// Send data to Python
double[] dataToSend = {1.0, 2.0, 3.0};
LimelightHelpers.setPythonScriptData("", dataToSend);

// Get data from Python
double[] dataFromPython = LimelightHelpers.getPythonScriptData("");

8. Getting Detailed Results From NetworkTables (RawTargets)

For maximum performance, you can access raw target data directly from NetworkTables, bypassing JSON parsing:

// Get raw AprilTag/Fiducial data
RawFiducial[] fiducials = LimelightHelpers.getRawFiducials("");
for (RawFiducial fiducial : fiducials) {
  int id = fiducial.id;                       // Tag ID
  double txnc = fiducial.txnc;                // X offset (no crosshair)
  double tync = fiducial.tync;                // Y offset (no crosshair)
  double ta = fiducial.ta;                    // Target area
  double distToCamera = fiducial.distToCamera; // Distance to camera
  double distToRobot = fiducial.distToRobot;   // Distance to robot
  double ambiguity = fiducial.ambiguity;       // Tag pose ambiguity
}
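A typical use of raw fiducial data is picking the closest tag to act on. The sketch below uses a hypothetical stand-in struct mirroring `RawFiducial`'s field names, since the real class is nested inside LimelightHelpers:

```java
// Sketch: select the fiducial nearest the robot from raw results.
// TagInfo is a hypothetical stand-in mirroring RawFiducial's fields.
class ClosestTag {
    static class TagInfo {
        int id;
        double distToRobot;
        TagInfo(int id, double distToRobot) {
            this.id = id;
            this.distToRobot = distToRobot;
        }
    }

    // Returns the nearest tag, or null if no tags are visible.
    static TagInfo closest(TagInfo[] tags) {
        TagInfo best = null;
        for (TagInfo t : tags) {
            if (best == null || t.distToRobot < best.distToRobot) {
                best = t;
            }
        }
        return best;
    }
}
```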

// Get raw neural detector results
RawDetection[] detections = LimelightHelpers.getRawDetections("");
for (RawDetection detection : detections) {
  int classID = detection.classId;
  double txnc = detection.txnc;
  double tync = detection.tync;
  double ta = detection.ta;
  // Access corner coordinates if needed
  double corner0X = detection.corner0_X;
  double corner0Y = detection.corner0_Y;
  // ... corners 1-3 available similarly
}

9. Limelight 4 IMU Integration

Limelight 4 has a built-in IMU that can be used with MegaTag2 for improved localization during rotation:

// Seed the internal IMU with your external gyro (call while disabled)
LimelightHelpers.SetIMUMode("", 1);

// Switch to internal IMU with external assist when enabled
LimelightHelpers.SetIMUMode("", 4);
LimelightHelpers.SetIMUAssistAlpha("", 0.001); // Adjust correction strength

See the MegaTag2 documentation for a full explanation of IMU modes.

10. Capturing Video with Rewind (LL4 Only)

Control the Rewind feature programmatically:

// Enable/disable rewind recording
LimelightHelpers.setRewindEnabled("", true);

// Capture the last N seconds of video after an event
LimelightHelpers.triggerRewindCapture("", 30.0); // Capture last 30 seconds

11. Capturing Snapshots

Trigger snapshot capture programmatically:

// Trigger a snapshot (rate-limited to once per 10 frames)
LimelightHelpers.triggerSnapshot("");

12. USB Port Forwarding for 3A/3G

Enable web interface and stream access for USB-connected cameras through the RoboRIO:

// Call once in robotInit()
LimelightHelpers.setupPortForwardingUSB(0); // First camera
LimelightHelpers.setupPortForwardingUSB(1); // Second camera (if applicable)

// Access via:
// USB Index 0: http://(robotIP):5801 (UI), http://(robotIP):5800 (stream)
// USB Index 1: http://(robotIP):5811 (UI), http://(robotIP):5810 (stream)

13. Connection Monitoring

Monitor Limelight connection status:

// Heartbeat increments every frame - use to detect disconnection
double heartbeat = LimelightHelpers.getHeartbeat("");
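Since the heartbeat only advances when frames arrive, a simple stall detector (a sketch, not a library feature) can flag a disconnect once the value stops changing for some number of robot loops. The stall threshold is an illustrative assumption:

```java
// Sketch: flag a Limelight disconnect when the heartbeat stops incrementing.
// The stall threshold (in robot loops) is an illustrative assumption.
class HeartbeatMonitor {
    static final int kStallLoops = 25; // ~0.5 s at a 50 Hz robot loop

    private double lastHeartbeat = -1.0;
    private int stalledLoops = 0;

    // Call once per robot loop with LimelightHelpers.getHeartbeat("").
    boolean isConnected(double heartbeat) {
        if (heartbeat != lastHeartbeat) {
            lastHeartbeat = heartbeat;
            stalledLoops = 0;
        } else {
            stalledLoops++;
        }
        return stalledLoops < kStallLoops;
    }
}
```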

LimelightHelpers Methods Reference

Target Data

| Method | Description |
| --- | --- |
| getTV | Returns true if a valid target exists |
| getTX | Horizontal offset from crosshair to target (degrees) |
| getTY | Vertical offset from crosshair to target (degrees) |
| getTXNC | Horizontal offset from principal pixel to target (degrees) |
| getTYNC | Vertical offset from principal pixel to target (degrees) |
| getTA | Target area (0-100% of image) |
| getTargetCount | Number of targets detected |
| getHeartbeat | Heartbeat value (increments each frame) |
| getCornerCoordinates | Corner coordinates of detected targets |
| getRawTargets | Raw contour data in normalized screen space |
| getRawFiducials | Raw AprilTag detection data |
| getRawDetections | Raw neural network detection data |

Pose Estimation

| Method | Description |
| --- | --- |
| getBotPoseEstimate_wpiBlue | MegaTag1 pose estimate (blue origin) |
| getBotPoseEstimate_wpiRed | MegaTag1 pose estimate (red origin) |
| getBotPoseEstimate_wpiBlue_MegaTag2 | MegaTag2 pose estimate (blue origin) |
| getBotPoseEstimate_wpiRed_MegaTag2 | MegaTag2 pose estimate (red origin) |
| getBotPose3d_wpiBlue | Robot pose as Pose3d (blue origin) |
| getBotPose3d_wpiRed | Robot pose as Pose3d (red origin) |
| SetRobotOrientation | Set robot orientation for MegaTag2 |

Pipeline & Camera Configuration

| Method | Description |
| --- | --- |
| setPipelineIndex | Switch to a specific pipeline |
| setPriorityTagID | Set priority tag for tx/ty targeting |
| setLEDMode_PipelineControl | Let pipeline control LEDs |
| setLEDMode_ForceOn/Off/Blink | Force LED state |
| setStreamMode_Standard/PiPMain/PiPSecondary | Set stream layout |
| setCropWindow | Set crop window (-1 to 1) |
| setCameraPose_RobotSpace | Set camera pose relative to robot |

AprilTag Settings

| Method | Description |
| --- | --- |
| setFiducial3DOffset | Set 3D offset tracking point |
| SetFiducialIDFiltersOverride | Filter to specific tag IDs |
| SetFiducialDownscalingOverride | Set detection downscale factor |
| setKeystone | Apply keystone to crop window |

IMU Settings (LL4 Only)

| Method | Description |
| --- | --- |
| SetIMUMode | Set IMU mode (0-4) |
| SetIMUAssistAlpha | Set complementary filter alpha |

Capturing Video with Rewind (LL4 Only)

| Method | Description |
| --- | --- |
| setRewindEnabled | Enable/disable rewind recording |
| triggerRewindCapture | Capture last N seconds of video |

Snapshot

| Method | Description |
| --- | --- |
| triggerSnapshot | Trigger snapshot capture (rate-limited) |

USB Port Forwarding

| Method | Description |
| --- | --- |
| setupPortForwardingUSB | Configure port forwarding for USB cameras |

Python Script Interface

| Method | Description |
| --- | --- |
| setPythonScriptData | Send data to Python SnapScript |
| getPythonScriptData | Get data from Python SnapScript |

Core Classes

PoseEstimate

Represents a 3D pose estimate.

Public Properties:

  • pose - The estimated Pose2d
  • timestampSeconds - Timestamp of the estimate
  • latency - Processing latency
  • tagCount - Number of tags used
  • tagSpan - Span between detected tags
  • avgTagDist - Average distance to tags
  • avgTagArea - Average area of tags
  • rawFiducials - Raw fiducial detections
  • isMegaTag2 - Whether this is a MegaTag2 result

RawTarget

Represents raw target/contour results in normalized screen space (-1 to 1).

Public Properties:

  • txnc - Horizontal offset from camera center
  • tync - Vertical offset from camera center
  • ta - Target area (0-100% of image)

RawFiducial

Represents raw AprilTag/Fiducial detection data

Public Properties:

  • id - AprilTag ID number
  • txnc - Horizontal offset from camera center (degrees)
  • tync - Vertical offset from camera center (degrees)
  • ta - Target area (0-100% of image)
  • distToCamera - Distance from camera to target (meters)
  • distToRobot - Distance from robot to target (meters)
  • ambiguity - AprilTag pose ambiguity score

RawDetection

Represents raw neural network detection data

Public Properties:

  • classId - Neural network class ID
  • txnc - Horizontal offset from camera center (degrees)
  • tync - Vertical offset from camera center (degrees)
  • ta - Target area (0-100% of image)
  • corner0_X, corner0_Y through corner3_X, corner3_Y - Corner coordinates