Software Change Log
Contact us or post on Chief Delphi to suggest upgrades for Limelight!
Limelight OS 2024.9.1 (7/7/24)
- The Map Builder Tool now accepts/converts WPILib .json apriltag layouts
- Add AprilTag3 to Python Snapscripts (from apriltag import apriltag)
- See the example in the examples GitHub repo
- Fix USB connectivity gateway issue on Windows.
Limelight OS 2024.9 (7/5/24)
MegaTag Upgrades
- Limelight OS has transitioned to NetworkTables 4.0
- MegaTag2 now uses NT4's getAtomic() to retrieve timestamped IMU updates from the roboRIO.
- Our timestamped image frames are matched to the two most relevant IMU samples before interpolation is performed.
- NT4 flush() has been added to LimelightLib. Adding flush() to older versions of Limelight OS will get you quite close to 2024.9 performance, but NT4 ensures accuracy is always high.
- The MT2 visualizer robot now has green bumpers, and MT1's visualizer robot uses yellow bumpers.
- Metrics are now collapsible, and the virtual robots can be hidden.
- The following video demonstrates how 2024.9's MegaTag 2 (green robot) with robot-side flush() is more robust than 2024.5's MegaTag2 without flush() (red robot).
USB ID and New USB IP Addresses
- Set the "USB ID" in the settings page to use multiple USB Limelights on any system.
- The USB-Ethernet interface that appears on your system will utilize an IP address that is determined by the USB ID
- Linux/Android/Mac Systems will now utilize the 172.29.0.0/24 subnet by default
- Windows systems will now utilize the 172.28.0.0/24 subnet by default.
- If the USB ID is set, the subnet changes to 172.29.(USBID).0/24 for Linux/Android/Mac and 172.28.(USBID).0/24 for Windows.
- You can now, for example, attach four Limelight devices to a single USB Hub by adjusting their hostnames and USB IDs
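The subnet rules above can be expressed as a small lookup; a minimal Python sketch (the `usb_subnet` helper name is ours, not part of any Limelight tool or API):

```python
def usb_subnet(usb_id: int = 0, windows: bool = False) -> str:
    """Return the USB-Ethernet subnet for a Limelight with the given USB ID.

    Windows hosts use 172.28.x.0/24; Linux/Android/Mac hosts use 172.29.x.0/24.
    The third octet is the USB ID (0 when unset).
    """
    base = 28 if windows else 29
    return f"172.{base}.{usb_id}.0/24"

# Four Limelights on one hub (Linux host), distinguished by USB ID:
print([usb_subnet(i) for i in range(4)])
# → ['172.29.0.0/24', '172.29.1.0/24', '172.29.2.0/24', '172.29.3.0/24']
```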
CPU Neural classifiers
- Upload a CPU .tflite classifier to enable neural classification without Google Coral. You can expect 15-18 FPS on LL3 variants.
- 2024.9 comes with a default CPU classifier.
- Set the classifier runtime to "CPU" to enable this feature
CPU Neural detectors
- Upload a CPU .tflite detector to enable neural detection without Google Coral. You can expect 10 FPS on LL3 variants.
- 2024.9 comes with a default CPU detector.
- Set the detector runtime to "CPU" to enable this feature
Limelight OS 2024.8 (7/3/24)
- Add python output (PythonOut), tx, ty, txnc, tync, ta to json results object
- Further improved MT2 latency compensation
Limelight OS 2024.7 (5/21/24)
- Upgrade to Linux 6.6
Bugfixes
- Fix vision pipeline conversion
- Fix calibration uploads, snapshot uploads, and nn uploads
Limelight OS 2024.6 (5/8/24)
LimelightLib Python
- pip install limelightlib-python
- Our Python library allows you to interact with USB and Ethernet Limelights on any platform.
- It allows for complete Limelight configuration without web UI interaction.
- Upload pipelines, neural networks, field maps, etc
- Make realtime changes to any pipeline parameter with an optional "flush to disk" option
- Post custom python input data, set robot orientation, etc.
MegaTag2 Upgrades
- MegaTag2 gyro latency compensation has been improved. Look out for more improvements soon!
- Add "Gyro latency adjustment" slider to the UI. To manually tune MegaTag 2 latency compensation, you can spin your robot and adjust the slider until localization results are perfect while rotating.
Standard Deviation Metrics
- The 3D Field visualizer now includes MegaTag1 and Megatag2 standard deviations for x, y, and yaw.
New "Focus" Pipeline Type
- While in "focus" mode, you will have access to a stream quality slider and a crop box slider
- Spin the lens to maximize the "focus" score.
- If your camera is in a fixed location, this takes less than one minute. We recommend focusing with a fixed / mounted Limelight.
New "Barcodes" Pipeline Type
- 50-60FPS Multi QR Code Detection and Decoding at 1280x800
- 50-60FPS Multi DataMatrix Detection and Decoding at 1280x800
- 30FPS Multi UPC, EAN, Code128, and PDF417 at 1280x800
- Barcode data strings are posted to the "rawbarcodes" nt array.
- The Barcodes pipeline will populate all 2D metrics such as tx, ty, ta, tcornxy, etc.
All-New REST API
- https://docs.limelightvision.io/docs/docs-limelight/apis/rest-http-api
- Our REST / HTTP API has been rebuilt from the ground up.
- The REST API allows for complete Limelight configuration without web UI interaction.
- Upload pipelines, neural networks, field maps, etc
- Make realtime changes to any pipeline parameter with an optional "flush to disk" option
- Post python input data, set robot orientation, etc.
Remove Camera Orientation Setting From UI (BREAKING CHANGE)
- This has been replaced by the "stream orientation" option. Calibration and targeting are never affected by this option.
- The new option only affects the stream. Options: Upside-Down, 90 Degrees Clockwise, 90 Degrees Counter-Clockwise, Horizontal Mirror, and Vertical Mirror
- Teams will now need to manually invert tx and ty as required while using rotated cameras.
Remove GRIP Support (BREAKING CHANGE)
Remove "Driver" zero-processing mode (BREAKING CHANGE)
- This has been replaced by the "Viewfinder" pipeline type
Add "Viewfinder" Pipeline type
- The viewfinder pipeline disables all processing for minimal latency
- This allows teams to design their own "Driver" pipelines for view-only modes
Pipeline Files now Use JSON format (BREAKING CHANGE)
- Pipelines still use the .vpr file extension
- (Broken in some cases in 2024.6) The UI will auto-convert pipelines to JSON when you use the "upload" button.
- (Fully functional) You may also use https://tools.limelightvision.io/pipeline-upgrade to upgrade your pipelines
Calibration UX Improvement
- Calibration settings are now cached. You no longer need to enter your calibration settings every time you want to calibrate.
- The default calibration dictionary has been updated to work with the recommended 800x600mm coarse board from Calib.io.
Calibration Mosaic
- Previously, it was difficult to determine the quality of calibration images
- The calibration tab now has a "Download Calibration Mosaic" button. The mosaic will show you exactly what each image is contributing to your calibration.
"Centroid" targeting region
- Centroid targeting mode has been added to the "Output" tab to improve object tracking with color pipelines
Dynamic 3D Offset (NT: fiducial_offset_set)
- It is now possible to adjust the 3D Offset without changing pipelines. This is useful for situations in which your "aim point" needs to change based on distance or other properties.
Add Modbus Support
- Limelight OS now has an always-on modbus server for inspection, logistics, and industrial applications
- See the modbus register spec here: https://docs.limelightvision.io/docs/docs-limelight/apis/modbus
- The default modbus server port may be changed in the UI's settings tab
- Through modbus and snapscript python pipelines, completely custom vision applications with bi-directional communication are now supported.
Custom NT server
- The settings tab now contains an entry for a custom NT server.
- This enables a new workflow which includes a glass NT server running on a PC, and Limelight 3G communicating over USB.
Rawfiducial changes
- The "area" value of raw fiducials is now a calibrated, normalized value ranging from ~0-1
All NetworkTables and JSON Changes
- Add NT getpipetype - Get the current pipeline type string (e.g. pipe_color, pipe_fiducial)
- Add NT tcclass - Classifier pipeline detected class name
- Add NT tdclass - Detector pipeline detected class name
- Add NT t2d for guaranteed atomic 2D targeting - [valid, targetcount, targetlatency, capturelatency, tx, ty, txnc, tync, ta, targetid, classifierID, detectorID, tlong, tshort, thor, tvert, ts(skew)]
- Remove NT tlong, tshort, thor, tvert, and ts
- Add NT 'crosshairs' array [cx0,cy0,cx1,cy1]
- Remove NT cx0, cy0, cx1, and cy1
- Add NT rawbarcodes - NT String Array of barcode data. Up to 32 entries.
- All "raw" arrays allow for up to 32 targets (up from 8)
- Add fiducial_offset_set dynamic 3D Offset setter
- Add "pType" to json top-level result
- Add "stdev_mt1" and "stdev_mt2" to json top-level result (x,y,z,roll,pitch,yaw) (meters, degrees)
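Since t2d is read as one atomic array, a small robot-side unpacker keeps the field order straight; a minimal Python sketch (the `parse_t2d` name and dict output are our own convention, assuming the 17-entry order listed above):

```python
# Field order of the atomic t2d NT array, per the changelog entry above.
T2D_FIELDS = ["valid", "targetcount", "targetlatency", "capturelatency",
              "tx", "ty", "txnc", "tync", "ta", "targetid",
              "classifierid", "detectorid", "tlong", "tshort",
              "thor", "tvert", "ts"]

def parse_t2d(t2d):
    """Unpack the 17-entry t2d array into a field-name -> value dict."""
    if len(t2d) != len(T2D_FIELDS):
        return {}  # malformed or empty array
    return dict(zip(T2D_FIELDS, t2d))
```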
Changes to Other File Formats and JSON Dumps
- The calibration file format has been simplified. Old calibrations are auto-converted to the new format upon upload
- One layer of nesting has been removed from Results and Status JSON dumps
Bugfixes
- Previously, if a Google Coral was unplugged while a Neural pipeline was active, the pipeline would permanently revert to "color/retro" mode
- Now, "CHECK CORAL" or "CHECK MODEL" will be printed to the image. The pipeline type will never change
- Previously, tags that successfully passed through the fiducial ID filter were sometimes drawn with a red outline instead of a green outline. This visualization problem has been fixed.
- Apriltag pipelines populate the tcornxy NT array
- Apriltag pipelines now fully respect the min-max area slider. Previously, AprilTag pipelines would filter 2D results based on Tag Area, but not 3D / Localization Results.
Limelight OS 2024.5.0 (4/9/24)
- Upgrade to Linux 6.1
Camera Stack Update
- The entire camera stack has been updated to fix a camera peripheral lock-up on Limelight3G.
- Symptoms include
- Be sure to retune exposure and gain settings after applying this update.
Dynamic Downscaling
- Teams may now set "fiducial_downscale_set" to override the current pipeline's downscale setting
- 0:UI control, 1:1x, 2:1.5x, 3:2x, 4:3x, 5:4x
- Use the new Helpers method with 0.0 (UI Control), 1.0, 1.5, 2.0, 3.0, 4.0
- This is a zero-overhead operation.
- By combining dynamic downscale and dynamic crop, teams can maximize FPS without managing multiple pipelines
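The code-to-factor mapping above is easy to get wrong in robot code; a minimal Python sketch of the translation (the `downscale_code` helper is illustrative, not a Limelight API):

```python
# Helper-style float factor -> fiducial_downscale_set NT code, per the table above.
# 0.0 means "leave the pipeline's UI setting in control".
FACTOR_TO_CODE = {0.0: 0, 1.0: 1, 1.5: 2, 2.0: 3, 3.0: 4, 4.0: 5}

def downscale_code(factor: float) -> int:
    """Translate a downscale factor into the integer written to
    the fiducial_downscale_set NT entry."""
    try:
        return FACTOR_TO_CODE[factor]
    except KeyError:
        raise ValueError(f"unsupported downscale factor: {factor}") from None
```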
MegaTag2 Improvements
- MT2 now works no matter the Limelight orientation, including "portrait" modes with 90 degree and -90 degree rolls
"rawdetections" nt array
- [classID, txnc, tync, ta, corner0x, corner0y, corner1x, corner1y, corner2x, corner2y, corner3x, corner3y]
- corners are in pixel space without calibration applied
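Because rawdetections is one flat array, robot code has to slice it per detection; a hedged Python sketch (the `parse_rawdetections` helper and its record layout are our own, assuming 12 entries per detection as listed above):

```python
def parse_rawdetections(arr):
    """Split the flat rawdetections array into one record per detection
    (12 entries each: class id, txnc, tync, ta, then four x/y corner pairs)."""
    stride = 12
    detections = []
    for i in range(0, len(arr) - stride + 1, stride):
        chunk = arr[i:i + stride]
        detections.append({
            "class_id": int(chunk[0]),
            "txnc": chunk[1], "tync": chunk[2], "ta": chunk[3],
            # Four (x, y) corner pairs in pixel space, uncalibrated.
            "corners": [(chunk[4 + 2 * j], chunk[5 + 2 * j]) for j in range(4)],
        })
    return detections
```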
Erode/Dilate Update
- Color pipelines now support up to 10 steps of dilation and 10 steps of erosion
- Color pipelines now have a "reverse morpho" option to reverse the order of the dilation and erosion steps
LimelightLib 1.6 (4/9/24)
- Add void SetFiducialDownscalingOverride(float downscale)
  - Set to 0 for pipeline control, or one of the following to override your pipeline's downscale setting: 1, 1.5, 2, 3, 4
- Add RawFiducial[] GetRawFiducials()
- Add RawDetection[] GetRawDetections()
Limelight OS 2024.4.0 (4/3/24)
Thanks to all of the teams who contributed ideas for this update.
Megatag 2
Megatag 2 is an ambiguity-free localizer. It has higher accuracy and higher precision than Megatag1, and it was built with the following requirements:
- Eliminate the pose ambiguity problem and increase robustness against image/corner noise.
- Provide excellent pose estimates given one or more tags, no matter the perspective.
- Increase robustness against physical AprilTag placement inaccuracies
- Reduce the amount of robot-side filtering necessary for good pose estimation results
Notice the difference between MegaTag2 (red robot) and Megatag (blue robot) in this highly ambiguous single-tag case:
Megatag2 requires you to set your robot's heading with a new method call. Here's a complete example:
boolean doRejectUpdate = false;
LimelightHelpers.SetRobotOrientation("limelight", m_poseEstimator.getEstimatedPosition().getRotation().getDegrees(), 0, 0, 0, 0, 0);
LimelightHelpers.PoseEstimate mt2 = LimelightHelpers.getBotPoseEstimate_wpiBlue_MegaTag2("limelight");
if(Math.abs(m_gyro.getRate()) > 720) // if our angular velocity is greater than 720 degrees per second, ignore vision updates
{
  doRejectUpdate = true;
}
if(mt2.tagCount == 0)
{
  doRejectUpdate = true;
}
if(!doRejectUpdate)
{
  m_poseEstimator.setVisionMeasurementStdDevs(VecBuilder.fill(.6,.6,9999999));
  m_poseEstimator.addVisionMeasurement(
      mt2.pose,
      mt2.timestampSeconds);
}
Megatag2 provides excellent, ambiguity-free results at any distance given a single tag. This means it is perfectly viable to focus only on tags that are relevant and within your desired placement tolerance. If a tag is not in the correct location or irrelevant, filter it out with the new dynamic filter feature.
Dynamic Apriltag Filtering
- Because MegaTag2 is not desperate to accumulate as many AprilTags as possible, you can safely filter for well-placed and relevant tags:
int[] validIDs = {3,4};
LimelightHelpers.SetFiducialIDFiltersOverride("limelight", validIDs);
Transitioning to MegaTag2
Megatag2 requires your robot heading to work properly. A heading of 0 degrees, 360 degrees, 720 degrees, etc. means your robot is facing the red alliance wall. This is the same convention used in PathPlanner, Choreo, Botpose, and Botpose_wpiblue.
Once you have added SetRobotOrientation() to your code, check the built-in 3D visualizer. At close range, Megatag2 and Megatag1 should match closely if not exactly. At long range, Megatag 2 (red robot) should be more accurate and more stable than Megatag1 (blue robot).
Once the built-in visualizer is showing good results, you can safely use Megatag2 to guide your robot during the autonomous period.
The only filter we recommend adding is a "max angular velocity" filter. You may find that at high angular velocities, your pose estimates become slightly less trustworthy.
The examples repo has a Megatag2 example with this filter.
if(Math.abs(m_gyro.getRate()) > 720) // if our angular velocity is greater than 720 degrees per second, ignore vision updates
{
doRejectUpdate = true;
}
if(mt2.tagCount == 0)
{
doRejectUpdate = true;
}
LimelightLib 1.5 (4/3/24)
Add
getBotPoseEstimate_wpiRed_MegaTag2()
getBotPoseEstimate_wpiBlue_MegaTag2()
SetRobotOrientation()
Limelight OS 2024.3.4 (3/20/24)
Thanks to all of the teams who contributed ideas for this update.
Higher-Precision Single Tag Solver
MegaTag's single tag 3D solver has been improved. It is far more stable than before at long range.
JSON Disabled by Default (Breaking Change)
- JSON has been disabled by default to reduce bandwidth usage across the board, particularly for teams using auto-subscribing dashboards such as Shuffleboard.
- This should also reduce RoboRIO NT load and CPU usage.
- Re-enable json per-pipeline in the output tab.
- This update includes changes that should allow even more teams to transition away from JSON for pose estimation.
Undistorted Area (Breaking Change)
Corners are undistorted before computing the area of any target.
Include Per-Fiducial Metrics in botpose, botpose_wpiblue, and botpose_wpired
[tx, ty, tz, roll, pitch, yaw, tagCount, tagSpan (meters), averageDistance (meters), averageArea (percentage of image), (tags) ]
For every tag used by megatag localization, the above arrays now include (tagID, txnc, tync, ta, distanceToCamera, distanceToRobot, ambiguity)
Ambiguity is a new metric ranging from 0-1 that indicates the ambiguity of the current perspective of the tag. Single-tag-updates with tag ambiguities > .9 should probably be rejected.
"rawtargets" and "rawfiducials" nt arrays (Breaking Change)
- rawtargets - (txnc,tync,ta) per target
- rawfiducials - (tagID, txnc, tync, ta, distanceToCamera, distanceToRobot, ambiguity) per target
- The previous rawtargets NT entries (tx0,ty0, etc) have been removed.
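Combining the rawfiducials layout above with the ambiguity guidance in these notes (single-tag updates with ambiguity > .9 should probably be rejected), a minimal robot-side rejection check might look like this in Python (the helper name is ours, not a Limelight API):

```python
def should_reject_single_tag(fiducials, max_ambiguity=0.9):
    """Reject a vision update when it rests on a single, ambiguous tag.

    `fiducials` is a list of (tagID, txnc, tync, ta, distanceToCamera,
    distanceToRobot, ambiguity) tuples, matching the rawfiducials layout.
    """
    if not fiducials:
        return True  # no tags, nothing to trust
    if len(fiducials) == 1 and fiducials[0][6] > max_ambiguity:
        return True  # lone tag with an ambiguous perspective
    return False
```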
Bugfixes
- Zero-out all single-tag 3D information if the priorityID has not been found. Previously, only Tx, Ta, Ty, and Tv were zeroed-out when the priorityTag was not found
- Zero-out botpose if the only visible tag has been filtered by the UI's "ID Filters" feature. Previously, botposes would reset to the center of the field rather than (0,0,0) if the only visible tag was a filtered tag.
- 2024.2 would post NANs to certain networktables entries in some rare instances. This will no longer happen.
LimelightLib 1.4 (3/21/24)
- Add support for 2024.3.4 Raw Fiducials. PoseEstimates now include an array of rawFiducials which contain id, txnc, tync, ta, distanceToCamera, distanceToRobot, and ambiguity
Limelight Hardware Manager 1.4 (3/18/24)
Bugfix
Discovered USB Limelights are now properly displayed as a single entry rather than two partial entries.
Limelight OS 2024.2.2 (3/17/24)
Bugfix
TX and TY properly respect the crosshair in NT entries.
Limelight OS 2024.2 (3/8/24)
Zero-Crosshair targeting with Json (tx_nocross, ty_nocross) and NT (txnc, tync)
If you are using tx/ty targeting with custom intrinsics calibration, you are likely still seeing camera-to-camera variation because the Limelight crosshair is not aligned with the principal pixel of the camera. Teams that require greater tx/ty accuracy can either configure the crosshair to match the principal pixel, or use these new metrics.
Potentially breaking change in tx/ty
A bug was introduced earlier this season that broke custom calibration specifically for tx, ty, and tx + ty in json. Limelight OS was reverting to default calibrations in several cases.
Calibration Upgrades
Calibration is now nearly instantaneous, no matter how many images have been captured. We've also fixed a crash caused by having more than around 30 images under certain circumstances.
We're consistently getting a reprojection error of around 1 pixel with 15-20 images of paper targets, and an error of .3 pixels with our high-quality calib.io targets.
Fiducial Filters UI Fix
Fiducial filter textbox now accepts any number of filters.
Misc
Apriltag Generator defaults to "no border" to prevent scaling with 165.1 mm tags.
Limelight OS 2024.1.1 (2/24/24)
- Fix priorityID
Limelight OS 2024.1 (2/24/24)
HW Metrics (hw key in networktables, /status GET request)
- Teams now have the ability to log FPS, CPU Load, RAM usage, and CPU Temp.
- Addresses https://github.com/LimelightVision/limelight-feedback/issues/5
Calibration Improvement
- Fix crash that could occur if a calibration image contained exactly one valid detection. Improve web ui feedback.
Robot Localization Improvement (tag count and more)
- All networktables botpose arrays (botpose, botpose_wpiblue, and botpose_wpired) now include Tag Count, Tag Span (meters), Average Distance (meters), and Average Area (percentage of image)
- These metrics are computed with tags that are included in the uploaded field map. Custom and/or mobile AprilTags will not affect these metrics.
- With device calibration and this botpose array upgrade, we do not believe JSON is necessary for the vast majority of use-cases this year.
- JSON dump now includes botpose_avgarea, botpose_avgdist, botpose_span, and botpose_tagcount for convenience.
  [tx,ty,tz,rx,ry,rz,latency,tagcount,tagspan,avgdist,avgarea]
New Feature: Priority ID (NT Key priorityid)
- If your robot uses both odometry-based features and tx/ty-based features, you've probably encountered the following UX problem:
- Before this update, there was no way to easily switch the preferred tag ID for tx/ty targeting.
- While there is an ID filter in the UI, it
  - is not dynamic
  - removes tags from MegaTag localization.
- This meant teams were creating several pipelines: one for 3D localization, and one per tx/ty tag (one pipeline for blue-side shooting with tag 7, one for blue-side amping with tag 6, etc.).
- The new priority ID feature (NT Key priorityid) allows you to tell your Limelight "After all tag detection, filtering, and sorting is complete, focus on the tag that matches the priority ID."
- This does not affect localization in any way, and it only slightly changes the order of tags in JSON results.
- If your priority ID is not -1, tx/ty/ta will return 0 unless the chosen tag is visible.
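The semantics above can be sketched as a small selection function; a minimal sketch in Python (the `pick_priority_target` helper and its tag-dict shape are illustrative, not part of any Limelight API):

```python
def pick_priority_target(sorted_tags, priority_id):
    """Apply priorityid semantics to an already filtered and sorted tag list.

    With priority_id == -1, the normal sort order stands.  Otherwise the
    matching tag (if visible) becomes the tx/ty/ta target; if it is not
    visible, there is no target and tx/ty/ta read 0.
    """
    if priority_id == -1:
        return sorted_tags[0] if sorted_tags else None
    for tag in sorted_tags:
        if tag["id"] == priority_id:
            return tag
    return None  # priority tag not visible -> tx/ty/ta return 0
```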
Misc
- Fix "x" across the screen while using dual-target mode in a 3D apriltag pipeline
- REST API expanded with neural network label uploads (/uploadlabels)
- Include device nickname in /status json
LimelightLib 1.3
- LimelightLib (Java and CPP) have been updated to make localization easier than ever.
LimelightHelpers.PoseEstimate limelightMeasurement = LimelightHelpers.getBotPoseEstimate_wpiBlue("limelight");
if(limelightMeasurement.tagCount >= 2)
{
m_poseEstimator.setVisionMeasurementStdDevs(VecBuilder.fill(.7,.7,9999999));
m_poseEstimator.addVisionMeasurement(
limelightMeasurement.pose,
limelightMeasurement.timestampSeconds);
}
New resources for Teams
Limelight Feedback and Issue Tracker: https://github.com/LimelightVision/limelight-feedback/issues
Examples Repo: https://github.com/LimelightVision/limelight-examples
Aiming and Ranging with Swerve Example: https://docs.limelightvision.io/docs/docs-limelight/tutorials/tutorial-swerve-aiming-and-ranging
MegaTag Localization Example: https://docs.limelightvision.io/docs/docs-limelight/tutorials/tutorial-swerve-pose-estimation
Thanks to recent contributors jasondaming, Gold876, JosephTLockwood, Andrew Gasser, and virtuald
Limelight 2024 Updates (2/6/24)
Limelight Documentation Upgrade
- The documentation has been rewritten to streamline the setup process
Limelight AprilTag Generator
- https://tools.limelightvision.io/ now features the first-ever online AprilTag generator.
- Select your paper size, marker size, and tag IDs to generate a printable PDF.
- Safari may not properly display tags at the moment.
Limelight Map Builder
- https://tools.limelightvision.io/map-builder
- You can now build custom AprilTag maps with an intuitive UI.
- The default family and tag size have been updated to match the 2024 field.
New Hardware Manager
- The Finder Tool is now the Limelight Hardware Manager
- It has been rewritten from scratch. It now reliably detects Limelights, provides more useful diagnostic information, and does not require restarts to work properly.
- Get it now from the downloads page
Train your own Neural Networks
- You can train your very own detection models for free with RoboFlow, the Limelight Detector Training Notebook, and our new tutorial
2024 AprilTag Map and Note Detector
- The map and detector model have been added to the downloads page and the latest Limelight OS image.
Limelight OS 2024.0 (2/6/24)
ChArUco Calibration Fixes
- Our ChArUco detector's subpixel accuracy has been increased. A reprojection error of 1-2 pixels is now achievable with clipboard targets and 20 images.
- Using the same camera and the same target, 2023.6 achieved an RPE of 20 pixels, and 2024.0 achieved an RPE of 1.14 pixels.
- Input fields no longer accept letters and special characters. This eliminates the potential for a crash.
Out-Of-The-Box Megatag Accuracy Improvement
- Before this update, Limelight's internal Megatag map generator referenced the UI's tag size slider instead of the tag sizes supplied by the .fmap file.
- Megatag now respects the tag sizes configured in fmap files and ignores the size slider.
- If your size slider has not been set to 165.1 mm, you will notice an immediate improvement in localization accuracy
Performance Upgrades and Bugfixes
- Higher FPS AprilTag pipelines
- The performance of the Field-Space Visualizer has been significantly improved.
Bugfixes
- Apriltags in 3D visualizers were sometimes drawn with incorrect or corrupted tag images. Tags are now always displayed correctly.
- "v" / tv / "valid" will now only return "1" if there are valid detections. Previously, tv was always "1"
2023.6 (4/18/23)
Easy ChArUco Calibration & Calibration Visualizers
- ChArUco calibration is considered better than checkerboard calibration because it handles occlusions and bad corner detections, and it does not require the entire board to be visible. This makes it much easier to capture calibration board corners close to the edges and corners of your images. This is crucial for distortion coefficient estimation.
- Limelight’s calibration process provides feedback at every step, and will ensure you do all that is necessary for good calibration results. A ton of effort has gone into making this process as bulletproof as possible.
- Most importantly, you can visualize your calibration results right next to the default calibration. At a glance, you can understand whether your calibration result is reasonable or not.
- You can also use the calibration dashboard as a learning tool. You can modify downloaded calibration results files and reupload them to learn how the intrinsics matrix and distortion coefficients affect targeting results, FOV, etc.
- Take a look at this video:
2023.5.1 & 2023.5.2 (3/22/23)
- Fixed regression introduced in 2023.5.0 - While 2023.5 fixed MegaTag for all non-planar layouts, it reduced the performance of single-tag pose estimates. This has been fixed. Single-tag pose estimates use the exact same solver used in 2023.4.
- Snappier snapshot interface. The snapshot grid now loads low-res 128p thumbnails.
- Limelight Yaw is now properly presented in the 3D visualizers. It is CCW-positive in the visualizer and internally.
- Indicate which targets are currently being tracked in the field-space visualizer.
2023.5.0 (3/21/23)
Breaking Changes
- Fixed regression - Limelight Robot-Space "Yaw" was inverted in previous releases. Limelight yaw in the web ui is now CCW-Positive internally.
Region Selection Update
- Region selection now works as expected in neural detector pipelines.
- Add 5 new region options to select the center, top, left, right, or bottom of the unrotated target rectangle.
"hwreport" REST API
- :5807/hwreport will return a JSON response detailing camera intrinsics and distortion information
MegaTag Fix
- Certain non-coplanar apriltag layouts were broken in MegaTag. This has been fixed, and pose estimation is now stable with all field tags. This enables stable pose estimation at even greater distances than before.
Greater tx and ty accuracy
- TX and TY are more accurate than ever. Targets are fully undistorted, and FOV is determined wholly by camera intrinsics.
2023.4.0 (2/18/23)
Neural Detector Class Filter
Specify the classes you want to track for easy filtering of unwanted detections.
Neural Detector expanded support
Support any input resolution, support additional output shapes to support other object detection architectures. EfficientDet0-based models are now supported.
2023.3.1 (2/14/23)
AprilTag Accuracy Improvements
Improved intrinsics matrix and, most importantly, improved distortion coefficients for all models. Noticeable single AprilTag Localization improvements.
Detector Upload
Detector upload fixed.
2023.3 (2/13/23)
Capture Latency (NT Key: "cl", JSON Results: "cl")
The new capture latency entry represents the time between the end of the exposure of the middle row of Limelight's image sensor and the beginning of the processing pipeline.
New Quality Threshold for AprilTags
Spurious AprilTags are now more easily filtered out with the new Quality Threshold slider. The default value set in 2023.3 should remove most spurious detections.
Camera Pose in Robot Space Override (NT Keys: "camerapose_robotspace_set", "camerapose_robotspace")
Your Limelight's position in robot space may now be adjusted on-the-fly. If the key is set to an array of zeros, the pose set in the web interface is used.
Here's an example of a Limelight on an elevator:
Increased Max Exposure
The maximum exposure time is now 33ms (up from 12.5 ms). High-fps capture modes are still limited to (1/fps) seconds. 90hz pipelines, for example, will not have brighter images past 11ms exposure time.
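Given the (1/fps) cap described above, the effective exposure ceiling for a capture mode can be computed as follows (the `max_usable_exposure_ms` helper is illustrative, not a Limelight API):

```python
def max_usable_exposure_ms(fps: float, slider_ms: float = 33.0) -> float:
    """Effective exposure ceiling: the slider allows up to 33 ms, but a
    capture mode can never expose longer than one frame period (1000/fps ms)."""
    frame_period_ms = 1000.0 / fps
    return min(slider_ms, frame_period_ms)
```

For a 90 Hz pipeline this returns roughly 11.1 ms, matching the note above that 90 Hz images stop getting brighter past ~11 ms of exposure.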
Botpose updates
All three botpose arrays in networktables have a seventh entry representing total latency (capture latency + targeting latency).
Bugfixes
- Fix LL3 MJPEG streams in shuffleboard
- Fix camMode - driver mode now produces bright, usable images.
- Exposure label has been corrected - each "tick" represents 0.01ms and not 0.1 ms
- Fix neural net detector upload
2023.2 (1/28/23)
Making 3D easier than ever.
WPILib-compatible Botposes
Botpose is now even easier to use out-of-the-box.
These match the WPILib Coordinate systems.
All botposes are printed directly in the field-space visualizer in the web interface, making it easy to confirm at a glance that everything is working properly.
Easier access to 3D Data (Breaking Changes)
RobotPose in TargetSpace is arguably the most useful data coming out of Limelight with respect to AprilTags. Using this alone, you can perfectly align a drivetrain with an AprilTag on the field.
- NetworkTables Key “campose” is now “camerapose_targetspace”
- NetworkTables Key “targetpose” is now “targetpose_cameraspace”
- New NetworkTables Key - “targetpose_robotspace”
- New NetworkTables Key - “botpose_targetspace”
Neural Net Upload
Upload teachable machine models to the Limelight Classifier Pipeline. Make sure they are Tensorflow Lite EdgeTPU compatible models. Upload .tflite and .txt label files separately.
2023.1 (1/19/23)
MegaTag and Performance Boosts
Correcting A Mistake
The default marker size parameter in the UI has been corrected to 152.4mm (down from 203.2mm). This was the root of most accuracy issues.
Increased Tracking Stability
There are several ways to tune AprilTag detection and decoding. We’ve improved stability across the board, especially in low light / low exposure environments.
Ultra Fast Grayscaling
Grayscaling is 3x-6x faster than before. Teams will always see a gray video stream while tracking AprilTags.
Cropping For Performance
AprilTag pipelines now have crop sliders. Cropping your image will result in improved framerates at any resolution.
Easier Filtering
There is now a single “ID filter” field in AprilTag pipelines which filters JSON output, botpose-enabled tags, and tx/ty-enabled tags. The dual-filter setup was buggy and confusing.
Breaking Change
The NT Key “camtran” is now “campose”
JSON update
"botpose" is now a part of the json results dump
Field Space Visualizer Update
The Field-space visualizer now shows the 2023 FRC field. It should now be easier to judge botpose accuracy at a glance.
Limelight MegaTag (new botpose)
My #1 priority has been rewriting botpose for greater accuracy, reduced noise, and ambiguity resilience. Limelight’s new botpose implementation is called MegaTag. Instead of computing botpose with a dumb average of multiple individual field-space poses, MegaTag essentially combines all tags into one giant 3D tag with several keypoints. This has enormous benefits.
The following GIF shows a situation designed to induce tag flipping: Green Cylinder: Individual per-tag bot pose Blue Cylinder: 2023.0.1 BotPose White Cylinder: New MegaTag Botpose
Notice how the new botpose (white cylinder) is extremely stable compared to the old botpose (blue cylinder). You can watch the tx and ty values as well.
Here’s the full screen, showing the tag ambiguity:
Here are the advantages:
- Botpose is now resilient to ambiguities (tag flipping) if more than one tag is in view (unless they are close and coplanar; ideally the keypoints are not coplanar).
- Botpose is now more resilient to noise in tag corners if more than one tag is in view. The farther away the tags are from each other, the better.
- This is not restricted to planar tags. It scales to any number of tags in full 3D and in any orientation. Floor tags and ceiling tags would work perfectly.
Here’s a diagram demonstrating one aspect of how this works with a simple planar case. The results are actually better than what is depicted, as the MegaTag depicted has a significant error applied to three points instead of one point. As the 3D combined MegaTag increases in size and in keypoint count, its stability increases.
Neural Net upload is being pushed to 2023.2!
2023.0.0 and 2023.0.1 (1/11/23)
Introducing AprilTags, Robot localization, Deep Neural Networks, a rewritten screenshot interface, and more.
Features, Changes, and Bugfixes
- New sensor capture pipeline and Gain control
- Our new capture pipeline allows for exposure times 100x shorter than they were in 2022. The new pipeline also enables gain control. This is extremely important for AprilTag tracking, and will serve to make retroreflective targeting more reliable than ever. Before Limelight OS 2023, Limelight's sensor gain was non-deterministic (we implemented some tricks to make it work anyway).
- With the new "Sensor Gain" slider, teams can make images darker or brighter than ever before without touching the exposure slider. Increasing gain will increase noise in the image.
- Combining lower gain with the new lower exposure times, it is now possible to produce nearly completely black images with full-brightness LEDs and retroreflective targets. This will help mitigate LED and sunlight reflections while tracking retroreflective targets.
- By increasing Sensor Gain and reducing exposure, teams will be able to minimize the effects of motion blur due to high exposure times while tracking AprilTags.
- We have managed to develop this new pipeline while retaining all features - 90fps, hardware zoom, etc.
- More Resolution Options
- There are two new capture resolutions for LL1, LL2, and LL2+: 640x480 at 90fps, and 1280x960 at 22fps
- Optimized Web Interface
- The web gui will now load and initialize up to 3x faster on robot networks.
- Rewritten Snapshots Interface
- The snapshots feature has been completely rewritten to allow for image uploads, image downloads, and image deletion. There are also new APIs for capturing snapshots detailed in the documentation.
- SolvePnP Improvements
- Our solvePnP-based camera localization feature had a nasty bug that seriously limited its accuracy on every fourth frame. This has been addressed, and a brand-new full 3D canvas has been built for Retroreflective/Color SolvePnP visualizations.
- Web Interface Bugfix
- There was an extremely rare issue in 2022 that caused the web interface to permanently break during the first boot after flashing, which would force the user to re-flash. The root cause was found and fixed for good.
- New APIs
- Limelight now includes REST and Websocket APIs. The REST, Websocket, and NetworkTables APIs all support the new JSON dump feature, which lists all data for all targets in a human-readable, simple-to-parse format for FRC and all other applications.
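Consuming the JSON dump only requires a standard JSON parser. The sketch below parses a sample dump; the field names ("Results", "Fiducial", "fID", "tx") follow the documented results format, but treat the exact sample values as illustrative.

```python
import json

# A hypothetical JSON dump as it might be returned over REST, Websocket, or NT.
sample = json.loads("""
{
  "Results": {
    "tv": 1,
    "Fiducial": [
      {"fID": 3, "tx": -4.2, "ty": 1.1, "ta": 0.8},
      {"fID": 7, "tx": 12.5, "ty": -0.4, "ta": 0.3}
    ]
  }
}
""")

# Pull every detected tag's ID and horizontal offset in one pass.
tag_offsets = [(f["fID"], f["tx"]) for f in sample["Results"]["Fiducial"]]
```

The same structure works regardless of transport, so robot code and dashboard tools can share one parsing path.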
Zero-Code Learning-Based Vision & Google Coral Support
- Google Coral is now supported by all Limelight models. Google Coral is a 4-TOPS (trillions of operations per second) USB hardware accelerator that is purpose-built for inference on 8-bit neural networks.
- Just like retroreflective tracking a few years ago, the barrier to entry for learning-based vision on FRC robots has been too high for the average team to even make an attempt. We have developed all of the infrastructure required to make learning-based vision as easy as retroreflective targets with Limelight.
- We have a cloud GPU cluster, training scripts, a dataset aggregation tool, and a human labelling team ready to go. We are excited to bring deep neural networks to the FRC community for the first time.
- We currently support two types of models: Object Detection models, and Image classification models.
- Object detection models will provide "class IDs" and bounding boxes (just like our retroreflective targets) for all detected objects. This is perfect for real-time game piece tracking.
- Please contribute to the first-ever FRC object detection model by submitting images here: https://datasets.limelightvision.io/frc2023
- Use tx, ty, ta, and tclass networktables keys or the JSON dump to use detection networks
- Image classification models will ingest an image, and produce a single class label.
- To learn more and to start training your own models for Limelight, check out Teachable Machine by Google.
- https://www.youtube.com/watch?v=T2qQGqZxkD0
- Teachable machine models are directly compatible with Limelight.
- Image classifiers can be used to classify internal robot state, the state of field features, and so much more.
- Use the tclass networktables key to use these models.
- Limelight OS 2023.0 does not provide the ability to upload custom models. This will be enabled shortly in 2023.1
Zero-Code AprilTag Support
- AprilTags are as easy as retroreflective targets with Limelight. Because they have a natural hard filter in the form of an ID, there is even less of a reason to have your roboRIO do any vision-related filtering.
- To start, use tx, ty, and ta as normal. Zero code changes are required. Sort by any target characteristic, utilize target groups, etc.
- Because AprilTags are both always square and always uniquely identifiable, they provide the perfect platform for full 3D pose calculations.
- The feedback we've received for this feature in our support channels has been extremely positive. We've made AprilTags as easy as possible, from 2D tracking to full 3D robot localization on the field.
- Check out the Field Map Specification and Coordinate System Doc for more detailed information.
- There are four ways to use AprilTags with Limelight:
- AprilTags in 2D
- Use tx, ty, and ta. Configure your pipelines to seek out a specific tag ID.
<gif>
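Even when the pipeline isn't locked to one ID, robot-side filtering by tag ID stays trivial. A minimal sketch, where the field names ("fID", "tx", "ty", "ta") mirror the JSON dump and the sample detections are hypothetical:

```python
def select_tag(fiducials, wanted_id):
    """Return (tx, ty, ta) for the tag with the desired ID, or None."""
    for f in fiducials:
        if f["fID"] == wanted_id:
            return (f["tx"], f["ty"], f["ta"])
    return None

# Two detections from a hypothetical frame.
detections = [
    {"fID": 1, "tx": -10.0, "ty": 2.0, "ta": 0.4},
    {"fID": 4, "tx": 3.5, "ty": -1.0, "ta": 1.2},
]
```

Because the ID acts as a hard filter, there is no need for the color/shape heuristics that retroreflective tracking sometimes required.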
- Point-of-Interest 3D AprilTags
- Use the tx, ty, ta, and tid networktables keys. The point-of-interest offset is all most teams will need to track targets that do not directly have AprilTags attached to them.
<gif>
- Full 3D
- Track your LL, your robot, or tags in full 3D. Use campose or json to pull relevant data into your roboRIO.
<gif>
- Field-Space Robot Localization
- Tell your Limelight how it's mounted, upload a field map, and your LL will provide the field pose of your robot for use with the WPILib Pose Estimator.
- Our field coordinate system places (0,0) at the center of the field instead of a corner.
- Use the botpose networktables key for this feature.
<gif>
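Because botpose uses a field-center origin while WPILib's pose estimator conventionally uses a field-corner origin, a small shift converts between the two. A sketch under stated assumptions: the field dimensions below are illustrative (roughly an FRC field, in meters), not values read from Limelight.

```python
FIELD_LENGTH = 16.54  # assumed field length in meters (illustrative)
FIELD_WIDTH = 8.02    # assumed field width in meters (illustrative)

def corner_to_center_origin(x, y):
    """Shift a pose whose origin is a field corner to a field-center origin."""
    return (x - FIELD_LENGTH / 2.0, y - FIELD_WIDTH / 2.0)

def center_to_corner_origin(x, y):
    """Inverse shift, e.g. before feeding a corner-origin pose estimator."""
    return (x + FIELD_LENGTH / 2.0, y + FIELD_WIDTH / 2.0)
```

The two helpers are exact inverses, so the translation can be applied at whichever boundary (vision input or odometry output) is most convenient.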
2022.3.0 (4/13/22)
Bugfixes and heartbeat.
Bugfixes
- Fix performance, stream stability, and stream lag issues related to USB Camera streams and multiple stream instances.
Features and Changes
- "hb" Heartbeat NetworkTable key
- The "hb" value increments once per processing frame, and resets to zero at 2000000000.
2022.2.3 (3/16/22)
Bugfixes and robot-code crop filtering.
Bugfixes
- Fix "stream" networktables key and Picture-In-Picture Modes
- Fix "snapshot" networktables key. Users must set the "snapshot" key to "0" before setting it to "1" to take a screenshot.
- Remove superfluous python-related alerts from web interface
Features and Changes
- Manual Crop Filtering
- Using the "crop" networktables array, teams can now control crop rectangles from robot code.
- For the "crop" key to work, the current pipeline must utilize the default, wide-open crop rectangle (-1 for minX and minY, +1 for maxX and +1 maxY).
- In addition, the "crop" networktable array must have exactly 4 values, and at least one of those values must be non-zero.
2022.2.2 (2/23/22)
Mandatory upgrade for all teams based on Week 0 and FMS reliability testing.
Bugfixes
- Fix hang / loss of connection / loss of targeting related to open web interfaces, FMS, FMS-like setups, Multiple viewer devices etc.
Features and Changes
-
Crop Filtering
- Ignore all pixels outside of a specified crop rectangle
- If your flywheel has any sweet spots on the field, you can make use of the crop filter to ignore the vast majority of pixels in specific pipelines. This feature should help teams reduce the probability of tracking non-targets.
- If you are tracking cargo, use this feature to look for cargo only within a specific part of the image. Consider ignoring your team's bumpers, far-away targets, etc.
-
Corners feature now compatible with smart target grouping
- This one is for the teams that want to do more advanced custom vision on the RIO
- "tcornxy" corner limit increased to 64 corners
- Contour simplification and force convex features now work properly with smart target grouping and corner sending
-
IQR Filter max increased to 3.0
-
Web interface live target update rate reduced from 30fps to 15fps to reduce bandwidth and cpu load while the web interface is open
2022.1 (1/25/22)
Bugfixes
- One of our suppliers informed us of an issue (and a fix!) that affects roughly 1 in 75 of the CPUs used specifically in Limelight 2 (it may be related to a specific batch). The fix makes sense, as this was one of the only remaining boot differences between the 2022 image and the 2020 image.
- Fix the upload buttons for GRIP inputs and SolvePNP Models
Features
-
Hue Rainbow
- The new hue rainbow makes it easier to configure the hue threshold.
-
Hue Inversion
- The new hue inversion feature is a critical feature if you want to track red objects, as red is at both the beginning and the end of the hue range:
-
New Python Libraries
- Added scipy, scikit-image, pywavelets, pillow, and pyserial to our python sandbox.
2022.0 and 2022.0.3 (1/15/22)
This is a big one. Here are the four primary changes:
Features
-
Smart Target Grouping
- Automatically group targets that pass all individual target filters.
- Will dynamically group any number of targets between the group-size slider's minimum and maximum
-
Outlier Rejection
- While this goal is more challenging than other goals, it gives us more opportunities for filtering. Conceptually, this goal is more than a “green blob.” Since we know that the goal is comprised of multiple targets that are close to each other, we can actually reject outlier targets that stand on their own.
- You should rely almost entirely on good target filtering for this year’s goal, and only use outlier rejection if you see or expect spurious outliers in your camera stream. If you have poor standard target filtering, outlier detection could begin to work against you!
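The "reject targets that stand on their own" idea can be sketched as a centroid-distance filter. This is an illustrative sketch, not Limelight's actual algorithm; the threshold and the sample coordinates are assumptions.

```python
def reject_outliers(targets, max_dist=0.2):
    """Drop targets farther than max_dist (normalized units) from the centroid
    of all detections. Clustered hub targets survive; lone detections do not.
    """
    if not targets:
        return []
    cx = sum(t[0] for t in targets) / len(targets)
    cy = sum(t[1] for t in targets) / len(targets)
    return [
        (x, y) for (x, y) in targets
        if ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 <= max_dist
    ]

# Three clustered hub strips plus one stray reflection far from the group.
cluster = [(0.50, 0.40), (0.55, 0.41), (0.52, 0.39)]
spurious = (0.05, 0.90)
```

Note the caveat from the text applies here too: a far-away outlier drags the centroid toward itself, so with poor standard filtering this step can misfire.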
-
Limelight 2022 Image Upgrades
- We have removed hundreds of moving parts from our software. These are the results:
- Compressed Image Size: 1.3 GB in 2020 → 76MB for 2022 (Reduced by a factor of 17!)
- Download time: 10s of minutes in 2020 → seconds for 2022
- Flash time: 5+ minutes in 2020 → seconds for 2022
- Boot time: 35+ seconds in 2020 → 14 seconds for 2022 (10 seconds to LEDS on)
-
Full Python Scripting
- Limelight has successfully exposed a large number of students to some of the capabilities of computer vision in robotics. With python scripting, teams can now take another step forward by writing their own image processing pipelines.
-
This update is compatible with all Limelight Hardware, including Limelight 1.
-
Known issues: Using hardware zoom with python will produce unexpected results.
-
2022.0.3 restores the 5802 GRIP stream, and addresses boot issues on some LL2 units by reverting some of the boot time optimizations. Boot time is increased to 16 seconds.