FRC Programming with LimelightLib (WPILib Java & C++)
- Java: https://github.com/LimelightVision/limelightlib-wpijava
- JavaDocs: https://limelightlib-wpijava-reference.limelightvision.io
- C++ Docs: https://limelightlib-wpicpp-reference.limelightvision.io
Usage
This is a single-file library.
Java: Copy the LimelightHelpers.java file from the latest release (https://github.com/LimelightVision/limelightlib-wpijava/releases) into your Java project's "robot" folder.
C++: Copy the LimelightHelpers.h file from the latest release (https://github.com/LimelightVision/limelightlib-wpicpp/releases) into your C++ project's include folder and #include "LimelightHelpers.h" where needed.
You don't need to create any objects for your Limelights - the library uses a simple, functional design to maximize ease of use and reliability.
Tips for Success
- Start simple! Many successful FRC teams use basic approaches effectively. For example, Team 2056 in 2024 used a standard 90FPS color pipeline instead of neural networks for game piece tracking.
- Think about what you actually need: do you need full field localization, or would simple target centering work (e.g., driveSpeed = result.getTx() * 0.03)?
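The centering idea above can be sketched as a pure helper so the gain and deadband are easy to tune. The class name, the gain of 0.03, and the 1-degree deadband below are illustrative choices, not library constants; on a real robot, LimelightHelpers.getTX("") would supply the offset.

```java
public class AimMath {
    // Convert a horizontal target offset (degrees) into a turn command in [-1, 1].
    // A small deadband prevents oscillation once the target is roughly centered.
    public static double aimCommand(double txDegrees, double kP, double deadbandDeg) {
        if (Math.abs(txDegrees) < deadbandDeg) {
            return 0.0; // Close enough; stop turning
        }
        // Proportional response, clamped to the motor command range
        return Math.max(-1.0, Math.min(1.0, txDegrees * kP));
    }
}
```

In a periodic method this might be called as `double turn = AimMath.aimCommand(LimelightHelpers.getTX(""), 0.03, 1.0);`.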
Key Concepts
1. Basic Usage
Every method accepts a Limelight name parameter. Pass an empty string to use the default name, "limelight":
Java:
// Basic targeting data
double tx = LimelightHelpers.getTX(""); // Horizontal offset from crosshair to target in degrees
double ty = LimelightHelpers.getTY(""); // Vertical offset from crosshair to target in degrees
double ta = LimelightHelpers.getTA(""); // Target area (0% to 100% of image)
boolean hasTarget = LimelightHelpers.getTV(""); // Do you have a valid target?
double txnc = LimelightHelpers.getTXNC(""); // Horizontal offset from principal pixel/point to target in degrees
double tync = LimelightHelpers.getTYNC(""); // Vertical offset from principal pixel/point to target in degrees
C++:
#include "LimelightHelpers.h"
// Basic targeting data
double tx = LimelightHelpers::getTX(""); // Horizontal offset from crosshair to target in degrees
double ty = LimelightHelpers::getTY(""); // Vertical offset from crosshair to target in degrees
double ta = LimelightHelpers::getTA(""); // Target area (0% to 100% of image)
bool hasTarget = LimelightHelpers::getTV(""); // Do you have a valid target?
double txnc = LimelightHelpers::getTXNC(""); // Horizontal offset from principal pixel/point to target in degrees
double tync = LimelightHelpers::getTYNC(""); // Vertical offset from principal pixel/point to target in degrees
2. Pipeline Management
Pipelines are like instantly-swappable programs that change how Limelight processes images. You can set up 10 different pipelines in the web interface:
Java:
// Switch to pipeline 0
LimelightHelpers.setPipelineIndex("", 0);
C++:
// Switch to pipeline 0
LimelightHelpers::setPipelineIndex("", 0);
3. LED Control
For Limelights with bright illumination LEDs, you can control the LEDs for different situations:
Java:
// Let the current pipeline control the LEDs
LimelightHelpers.setLEDMode_PipelineControl("");
// Force LEDs on/off/blink
LimelightHelpers.setLEDMode_ForceOn("");
LimelightHelpers.setLEDMode_ForceOff("");
LimelightHelpers.setLEDMode_ForceBlink("");
C++:
// Let the current pipeline control the LEDs
LimelightHelpers::setLEDMode_PipelineControl("");
// Force LEDs on/off/blink
LimelightHelpers::setLEDMode_ForceOn("");
LimelightHelpers::setLEDMode_ForceOff("");
LimelightHelpers::setLEDMode_ForceBlink("");
4. Field Localization With MegaTag
(See the MegaTag documentation and the Swerve Pose Estimation Tutorial for more details.)
Java:
// In your periodic function:
LimelightHelpers.PoseEstimate limelightMeasurement = LimelightHelpers.getBotPoseEstimate_wpiBlue("limelight");
if (limelightMeasurement.tagCount >= 2) { // Only trust the measurement if we see multiple tags
    m_poseEstimator.setVisionMeasurementStdDevs(VecBuilder.fill(0.7, 0.7, 9999999));
    m_poseEstimator.addVisionMeasurement(
        limelightMeasurement.pose,
        limelightMeasurement.timestampSeconds
    );
}
C++:
// In your periodic function:
LimelightHelpers::PoseEstimate limelightMeasurement = LimelightHelpers::getBotPoseEstimate_wpiBlue("limelight");
if (limelightMeasurement.tagCount >= 2) { // Only trust the measurement if we see multiple tags
    m_poseEstimator.SetVisionMeasurementStdDevs({0.7, 0.7, 9999999});
    m_poseEstimator.AddVisionMeasurement(
        limelightMeasurement.pose,
        units::second_t{limelightMeasurement.timestampSeconds}
    );
}
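Beyond the multiple-tag check above, teams often add extra sanity gates before fusing a MegaTag1 estimate. The sketch below is a hypothetical helper, not part of LimelightLib; the 0.7 ambiguity cutoff and 4 m distance limit are illustrative starting points to tune on your field, using fields available on PoseEstimate and RawFiducial.

```java
public class MegaTag1Gate {
    // Decide whether a MegaTag1 estimate is trustworthy enough to fuse.
    public static boolean accept(int tagCount, double maxAmbiguity, double avgTagDistMeters) {
        if (tagCount == 0) {
            return false; // No tags, no estimate
        }
        if (tagCount == 1 && maxAmbiguity > 0.7) {
            return false; // A single ambiguous tag can flip the solved pose
        }
        return avgTagDistMeters <= 4.0; // Distant tags produce noisy solves
    }
}
```

Here tagCount and avgTagDistMeters would come from the PoseEstimate, and maxAmbiguity from scanning its rawFiducials array.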
5. Field Localization With MegaTag2
(See the MegaTag2 documentation and the Swerve Pose Estimation Tutorial for more details.)
MegaTag2 enhances localization accuracy by fusing robot orientation data with vision. By providing gyro data, you help MegaTag2 constrain the localization problem and provide excellent results even if only a single tag is visible:
Java:
// First, tell Limelight your robot's current orientation
double robotYaw = m_gyro.getYaw();
LimelightHelpers.SetRobotOrientation("", robotYaw, 0.0, 0.0, 0.0, 0.0, 0.0);

// Get the pose estimate
LimelightHelpers.PoseEstimate limelightMeasurement = LimelightHelpers.getBotPoseEstimate_wpiBlue_MegaTag2("");

// Add it to your pose estimator
m_poseEstimator.setVisionMeasurementStdDevs(VecBuilder.fill(0.5, 0.5, 9999999));
m_poseEstimator.addVisionMeasurement(
    limelightMeasurement.pose,
    limelightMeasurement.timestampSeconds
);
C++:
// First, tell Limelight your robot's current orientation
double robotYaw = m_gyro.GetYaw();
LimelightHelpers::SetRobotOrientation("", robotYaw, 0.0, 0.0, 0.0, 0.0, 0.0);

// Get the pose estimate
LimelightHelpers::PoseEstimate limelightMeasurement = LimelightHelpers::getBotPoseEstimate_wpiBlue_MegaTag2("");

// Add it to your pose estimator
m_poseEstimator.SetVisionMeasurementStdDevs({0.5, 0.5, 9999999});
m_poseEstimator.AddVisionMeasurement(
    limelightMeasurement.pose,
    units::second_t{limelightMeasurement.timestampSeconds}
);
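Because MegaTag2 leans on the orientation you feed it, measurements captured while the robot spins quickly can be computed from stale gyro data. A common mitigation is to skip fusion above an angular-velocity threshold; the helper below is a hypothetical sketch, and the 720 deg/s cutoff is a frequently used starting point rather than a hard requirement.

```java
public class MegaTag2Gate {
    // Decide whether to fuse a MegaTag2 estimate this loop.
    // Skips frames with no tags or captured during rapid rotation.
    public static boolean shouldFuse(int tagCount, double yawRateDegPerSec) {
        if (tagCount == 0) {
            return false; // Nothing to fuse
        }
        return Math.abs(yawRateDegPerSec) <= 720.0; // Reject while spinning fast
    }
}
```

On a robot, yawRateDegPerSec would come from your gyro's rate output and tagCount from the PoseEstimate.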
6. Special AprilTag Functionality
Java:
// Set a custom crop window for improved performance (-1 to 1 for each value)
LimelightHelpers.setCropWindow("", -0.5, 0.5, -0.5, 0.5);

// Change the camera pose relative to robot center (x forward, y left, z up, degrees)
LimelightHelpers.setCameraPose_RobotSpace("",
    0.5,  // Forward offset (meters)
    0.0,  // Side offset (meters)
    0.5,  // Height offset (meters)
    0.0,  // Roll (degrees)
    30.0, // Pitch (degrees)
    0.0   // Yaw (degrees)
);

// Set AprilTag offset tracking point (meters)
LimelightHelpers.setFiducial3DOffset("",
    0.0, // Forward offset
    0.0, // Side offset
    0.5  // Height offset
);

// Configure AprilTag detection
LimelightHelpers.SetFiducialIDFiltersOverride("", new int[]{1, 2, 3, 4}); // Only track these tag IDs
LimelightHelpers.SetFiducialDownscalingOverride("", 2.0f); // Process at half resolution

// Adjust keystone crop window (-0.95 to 0.95 for both horizontal and vertical)
LimelightHelpers.setKeystone("", 0.1, -0.05);
C++:
// Set a custom crop window for improved performance (-1 to 1 for each value)
LimelightHelpers::setCropWindow("", -0.5, 0.5, -0.5, 0.5);

// Change the camera pose relative to robot center (x forward, y left, z up, degrees)
LimelightHelpers::setCameraPose_RobotSpace("",
    0.5,  // Forward offset (meters)
    0.0,  // Side offset (meters)
    0.5,  // Height offset (meters)
    0.0,  // Roll (degrees)
    30.0, // Pitch (degrees)
    0.0   // Yaw (degrees)
);

// Set AprilTag offset tracking point (meters)
LimelightHelpers::setFiducial3DOffset("",
    0.0, // Forward offset
    0.0, // Side offset
    0.5  // Height offset
);

// Configure AprilTag detection
LimelightHelpers::SetFiducialIDFiltersOverride("", std::vector<int>{1, 2, 3, 4}); // Only track these tag IDs
LimelightHelpers::SetFiducialDownscalingOverride("", 2.0f); // Process at half resolution

// Adjust keystone crop window (-0.95 to 0.95 for both horizontal and vertical)
LimelightHelpers::setKeystone("", 0.1, -0.05);
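A common use of setCropWindow is a dynamic crop that follows the target between frames. The helper below is a hypothetical sketch of the clamping math only; the half-width and half-height margins are tuning choices, and converting a target's tx/ty into normalized image coordinates depends on your camera's field of view.

```java
public class CropMath {
    // Build a {xMin, xMax, yMin, yMax} crop window centered on a target's
    // normalized position (-1 to 1), clamped to the image bounds.
    public static double[] cropAround(double cx, double cy, double halfW, double halfH) {
        return new double[] {
            Math.max(-1.0, cx - halfW), // xMin
            Math.min(1.0, cx + halfW),  // xMax
            Math.max(-1.0, cy - halfH), // yMin
            Math.min(1.0, cy + halfH)   // yMax
        };
    }
}
```

The result would be passed on as `LimelightHelpers.setCropWindow("", w[0], w[1], w[2], w[3]);`.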
7. Python Integration
Communicate with custom Python SnapScripts:
Java:
// Send data to Python
double[] dataToSend = {1.0, 2.0, 3.0};
LimelightHelpers.setPythonScriptData("", dataToSend);
// Get data from Python
double[] dataFromPython = LimelightHelpers.getPythonScriptData("");
C++:
// Send data to Python
std::vector<double> dataToSend = {1.0, 2.0, 3.0};
LimelightHelpers::setPythonScriptData("", dataToSend);
// Get data from Python
std::vector<double> dataFromPython = LimelightHelpers::getPythonScriptData("");
8. Getting Detailed Results From NetworkTables (RawTargets)
For maximum performance, you can access raw target data directly from NetworkTables, bypassing JSON parsing:
Java:
// Get raw AprilTag/Fiducial data
RawFiducial[] fiducials = LimelightHelpers.getRawFiducials("");
for (RawFiducial fiducial : fiducials) {
    int id = fiducial.id;                        // Tag ID
    double txnc = fiducial.txnc;                 // X offset (no crosshair)
    double tync = fiducial.tync;                 // Y offset (no crosshair)
    double ta = fiducial.ta;                     // Target area
    double distToCamera = fiducial.distToCamera; // Distance to camera
    double distToRobot = fiducial.distToRobot;   // Distance to robot
    double ambiguity = fiducial.ambiguity;       // Tag pose ambiguity
}

// Get raw neural detector results
RawDetection[] detections = LimelightHelpers.getRawDetections("");
for (RawDetection detection : detections) {
    int classID = detection.classId;
    double txnc = detection.txnc;
    double tync = detection.tync;
    double ta = detection.ta;
    // Access corner coordinates if needed
    double corner0X = detection.corner0_X;
    double corner0Y = detection.corner0_Y;
    // ... corners 1-3 available similarly
}
C++:
// Get raw AprilTag/Fiducial data
std::vector<LimelightHelpers::RawFiducial> fiducials = LimelightHelpers::getRawFiducials("");
for (const LimelightHelpers::RawFiducial& fiducial : fiducials) {
    int id = fiducial.id;                        // Tag ID
    double txnc = fiducial.txnc;                 // X offset (no crosshair)
    double tync = fiducial.tync;                 // Y offset (no crosshair)
    double ta = fiducial.ta;                     // Target area
    double distToCamera = fiducial.distToCamera; // Distance to camera
    double distToRobot = fiducial.distToRobot;   // Distance to robot
    double ambiguity = fiducial.ambiguity;       // Tag pose ambiguity
}

// Get raw neural detector results
std::vector<LimelightHelpers::RawDetection> detections = LimelightHelpers::getRawDetections("");
for (const LimelightHelpers::RawDetection& detection : detections) {
    int classID = detection.classId;
    double txnc = detection.txnc;
    double tync = detection.tync;
    double ta = detection.ta;
    // Access corner coordinates if needed
    double corner0X = detection.corner0_X;
    double corner0Y = detection.corner0_Y;
    // ... corners 1-3 available similarly
}
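A typical use of raw fiducial data is picking the closest visible tag, for example to drive distance-based shooter tuning. The helper below is a hypothetical sketch operating on a plain distance array; on a robot, the array would be built from the distToCamera fields of the RawFiducial results shown above.

```java
public class NearestTag {
    // Return the index of the smallest distance, or -1 if the array is empty.
    public static int indexOfNearest(double[] distToCamera) {
        int best = -1;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int i = 0; i < distToCamera.length; i++) {
            if (distToCamera[i] < bestDist) {
                bestDist = distToCamera[i];
                best = i;
            }
        }
        return best;
    }
}
```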
9. Limelight 4 IMU Integration
Limelight 4 has a built-in IMU that can be used with MegaTag2 for improved localization during rotation:
Java:
// Seed the internal IMU with your external gyro (call while disabled)
LimelightHelpers.SetIMUMode("", 1);
// Switch to internal IMU with external assist when enabled
LimelightHelpers.SetIMUMode("", 4);
LimelightHelpers.SetIMUAssistAlpha("", 0.001); // Adjust correction strength
C++:
// Seed the internal IMU with your external gyro (call while disabled)
LimelightHelpers::SetIMUMode("", 1);
// Switch to internal IMU with external assist when enabled
LimelightHelpers::SetIMUMode("", 4);
LimelightHelpers::SetIMUAssistAlpha("", 0.001); // Adjust correction strength
See the MegaTag2 documentation for a full explanation of IMU modes.
10. Capturing Video with Rewind (LL4 Only)
Control the Rewind feature programmatically:
Java:
// Enable/disable rewind recording
LimelightHelpers.setRewindEnabled("", true);
// Capture the last N seconds of video after an event
LimelightHelpers.triggerRewindCapture("", 30.0); // Capture last 30 seconds
C++:
// Enable/disable rewind recording
LimelightHelpers::setRewindEnabled("", true);
// Capture the last N seconds of video after an event
LimelightHelpers::triggerRewindCapture("", 30.0); // Capture last 30 seconds
11. Capturing Snapshots
Trigger snapshot capture programmatically:
Java:
// Trigger a snapshot (rate-limited to once per 10 frames)
LimelightHelpers.triggerSnapshot("");
C++:
// Trigger a snapshot (rate-limited to once per 10 frames)
LimelightHelpers::triggerSnapshot("");
12. USB Port Forwarding for 3A/3G
Enable web interface and stream access for USB-connected cameras through the RoboRIO:
Java:
// Call once in robotInit()
LimelightHelpers.setupPortForwardingUSB(0); // First camera
LimelightHelpers.setupPortForwardingUSB(1); // Second camera (if applicable)
// Access via:
// USB Index 0: http://(robotIP):5801 (UI), http://(robotIP):5800 (stream)
// USB Index 1: http://(robotIP):5811 (UI), http://(robotIP):5810 (stream)
C++:
// Call once in RobotInit()
LimelightHelpers::setupPortForwardingUSB(0); // First camera
LimelightHelpers::setupPortForwardingUSB(1); // Second camera (if applicable)
// Access via:
// USB Index 0: http://(robotIP):5801 (UI), http://(robotIP):5800 (stream)
// USB Index 1: http://(robotIP):5811 (UI), http://(robotIP):5810 (stream)
13. Connection Monitoring
Monitor Limelight connection status:
Java:
// Heartbeat increments every frame - use to detect disconnection
double heartbeat = LimelightHelpers.getHeartbeat("");
C++:
// Heartbeat increments every frame - use to detect disconnection
double heartbeat = LimelightHelpers::getHeartbeat("");
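Since the heartbeat only advances while frames arrive, disconnection can be detected by counting loop cycles where the value does not change. The class below is a hypothetical sketch; the 50-cycle (about 1 second at 50 Hz) stall threshold is an illustrative choice, and on a robot you would feed in LimelightHelpers.getHeartbeat("") each loop.

```java
public class HeartbeatMonitor {
    private double lastHeartbeat = -1.0;
    private int stalledCycles = 0;

    // Call once per robot loop. Returns true while the Limelight appears
    // connected, i.e. the heartbeat changed within the last 50 calls.
    public boolean update(double heartbeat) {
        if (heartbeat != lastHeartbeat) {
            lastHeartbeat = heartbeat;
            stalledCycles = 0; // Fresh frame arrived
        } else {
            stalledCycles++;   // No new frame this loop
        }
        return stalledCycles < 50;
    }
}
```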
LimelightHelpers Methods Reference
Target Data
| Method | Description |
|---|---|
| getTV | Returns true if a valid target exists |
| getTX | Horizontal offset from crosshair to target (degrees) |
| getTY | Vertical offset from crosshair to target (degrees) |
| getTXNC | Horizontal offset from principal pixel to target (degrees) |
| getTYNC | Vertical offset from principal pixel to target (degrees) |
| getTA | Target area (0-100% of image) |
| getTargetCount | Number of targets detected |
| getHeartbeat | Heartbeat value (increments each frame) |
| getCornerCoordinates | Corner coordinates of detected targets |
| getRawTargets | Raw contour data in normalized screen space |
| getRawFiducials | Raw AprilTag detection data |
| getRawDetections | Raw neural network detection data |
Pose Estimation
| Method | Description |
|---|---|
| getBotPoseEstimate_wpiBlue | MegaTag1 pose estimate (blue origin) |
| getBotPoseEstimate_wpiRed | MegaTag1 pose estimate (red origin) |
| getBotPoseEstimate_wpiBlue_MegaTag2 | MegaTag2 pose estimate (blue origin) |
| getBotPoseEstimate_wpiRed_MegaTag2 | MegaTag2 pose estimate (red origin) |
| getBotPose3d_wpiBlue | Robot pose as Pose3d (blue origin) |
| getBotPose3d_wpiRed | Robot pose as Pose3d (red origin) |
| SetRobotOrientation | Set robot orientation for MegaTag2 |
Pipeline & Camera Configuration
| Method | Description |
|---|---|
| setPipelineIndex | Switch to a specific pipeline |
| setPriorityTagID | Set priority tag for tx/ty targeting |
| setLEDMode_PipelineControl | Let pipeline control LEDs |
| setLEDMode_ForceOn/Off/Blink | Force LED state |
| setStreamMode_Standard/PiPMain/PiPSecondary | Set stream layout |
| setCropWindow | Set crop window (-1 to 1) |
| setCameraPose_RobotSpace | Set camera pose relative to robot |
AprilTag Settings
| Method | Description |
|---|---|
| setFiducial3DOffset | Set 3D offset tracking point |
| SetFiducialIDFiltersOverride | Filter to specific tag IDs |
| SetFiducialDownscalingOverride | Set detection downscale factor |
| setKeystone | Apply keystone to crop window |
IMU Settings (LL4 Only)
| Method | Description |
|---|---|
| SetIMUMode | Set IMU mode (0-4) |
| SetIMUAssistAlpha | Set complementary filter alpha |
Capturing Video with Rewind (LL4 Only)
| Method | Description |
|---|---|
| setRewindEnabled | Enable/disable rewind recording |
| triggerRewindCapture | Capture last N seconds of video |
Snapshot
| Method | Description |
|---|---|
| triggerSnapshot | Trigger snapshot capture (rate-limited) |
USB Port Forwarding
| Method | Description |
|---|---|
| setupPortForwardingUSB | Configure port forwarding for USB cameras |
Python Script Interface
| Method | Description |
|---|---|
| setPythonScriptData | Send data to Python SnapScript |
| getPythonScriptData | Get data from Python SnapScript |
Core Classes
PoseEstimate
Represents a 3D pose estimate.
Public Properties:
- pose - The estimated Pose2d
- timestampSeconds - Timestamp of the estimate
- latency - Processing latency
- tagCount - Number of tags used
- tagSpan - Span between detected tags
- avgTagDist - Average distance to tags
- avgTagArea - Average area of tags
- rawFiducials - Raw fiducial detections
- isMegaTag2 - Whether this is a MegaTag2 result
RawTarget
Represents raw target/contour results in normalized screen space (-1 to 1).
Public Properties:
- txnc - Horizontal offset from camera center
- tync - Vertical offset from camera center
- ta - Target area (0-100% of image)
RawFiducial
Represents raw AprilTag/Fiducial detection data
Public Properties:
- id - AprilTag ID number
- txnc - Horizontal offset from camera center (degrees)
- tync - Vertical offset from camera center (degrees)
- ta - Target area (0-100% of image)
- distToCamera - Distance from camera to target (meters)
- distToRobot - Distance from robot to target (meters)
- ambiguity - AprilTag pose ambiguity score
RawDetection
Represents raw neural network detection data
Public Properties:
- classId - Neural network class ID
- txnc - Horizontal offset from camera center (degrees)
- tync - Vertical offset from camera center (degrees)
- ta - Target area (0-100% of image)
- corner0_X, corner0_Y through corner3_X, corner3_Y - Corner coordinates