Complete NetworkTables API
Limelight OS features a NetworkTables 3 Client. It auto-connects to the NetworkTables 4 Server running on FRC Robots based on the Team Number / ID configured in the Settings UI.
Basic Targeting Data
Use the following code to retrieve this data:

Java:

```java
NetworkTableInstance.getDefault().getTable("limelight").getEntry("<variablename>").getDouble(0);
```

C++:

```cpp
nt::NetworkTableInstance::GetDefault().GetTable("limelight")->GetNumber("<variablename>", 0.0);
```

Python:

```python
NetworkTables.getTable("limelight").getNumber('<variablename>')
```
key | type | description |
---|---|---|
tv | int | 1 if valid target exists. 0 if no valid targets exist |
tx | double | Horizontal Offset From Crosshair To Target (LL1: -27 degrees to 27 degrees / LL2: -29.8 to 29.8 degrees) |
ty | double | Vertical Offset From Crosshair To Target (LL1: -20.5 degrees to 20.5 degrees / LL2: -24.85 to 24.85 degrees) |
txnc | double | Horizontal Offset From Principal Pixel To Target |
tync | double | Vertical Offset From Principal Pixel To Target |
ta | double | Target Area (0% of image to 100% of image) |
tl | double | The pipeline's latency contribution (ms). Add to "cl" to get total latency. |
cl | double | Capture pipeline latency (ms). Time between the end of the exposure of the middle row of the sensor to the beginning of the tracking pipeline. |
tshort | double | Sidelength of shortest side of the fitted bounding box (pixels) |
tlong | double | Sidelength of longest side of the fitted bounding box (pixels) |
thor | double | Horizontal sidelength of the rough bounding box (0 - 320 pixels) |
tvert | double | Vertical sidelength of the rough bounding box (0 - 320 pixels) |
getpipe | int | True active pipeline index of the camera (0 .. 9) |
json | string | Full JSON dump of targeting results |
tclass | string | Class name of primary neural detector result or neural classifier result |
tc | doubleArray | Get the average HSV color underneath the crosshair region (3x3 pixel region) as a NumberArray |
hb | double | Heartbeat value. Increases once per frame, resets at 2 billion |
hw | doubleArray | HW metrics [fps, cpu temp, ram usage, temp] |
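As a usage sketch, tv and tx can drive a simple proportional aiming loop: skip the frame when tv reports no valid target, otherwise turn at a rate proportional to tx. The gain and deadband below are illustrative assumptions, not values from this documentation, and the NetworkTables reads are omitted so the logic stands alone.

```java
public class AimSketch {
    // Hypothetical proportional gain and deadband; tune for your drivetrain.
    static final double KP = 0.03;
    static final double DEADBAND_DEG = 0.5;

    // tv: 1 if a valid target exists; txDegrees: horizontal offset from crosshair.
    static double steeringCommand(double tv, double txDegrees) {
        if (tv < 1.0) {
            return 0.0; // no valid target: do not rotate
        }
        if (Math.abs(txDegrees) < DEADBAND_DEG) {
            return 0.0; // close enough to centered: stop turning
        }
        return -KP * txDegrees; // rotate toward the target
    }

    public static void main(String[] args) {
        System.out.println(steeringCommand(0.0, 10.0)); // no target, no rotation
        System.out.println(steeringCommand(1.0, 10.0)); // proportional turn command
    }
}
```

In robot code, tv and tx would be fetched each loop with the getter pattern shown above before being passed to a helper like this.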
AprilTag and 3D Data
Use the following code to retrieve this data:

Java:

```java
NetworkTableInstance.getDefault().getTable("limelight").getEntry("<variablename>").getDoubleArray(new double[6]);
```

C++:

```cpp
nt::NetworkTableInstance::GetDefault().GetTable("limelight")->GetNumberArray("<variablename>", std::vector<double>(6));
```
key | type | description |
---|---|---|
botpose | doubleArray | Robot transform in field-space. Translation (X,Y,Z) in meters, Rotation (Roll,Pitch,Yaw) in degrees, total latency (cl+tl), tag count, tag span, average tag distance from camera, average tag area (percentage of image) |
botpose_wpiblue | doubleArray | Robot transform in field-space (blue driverstation WPILIB origin). Translation (X,Y,Z) in meters, Rotation (Roll,Pitch,Yaw) in degrees, total latency (cl+tl), tag count, tag span, average tag distance from camera, average tag area (percentage of image) |
botpose_wpired | doubleArray | Robot transform in field-space (red driverstation WPILIB origin). Translation (X,Y,Z) in meters, Rotation (Roll,Pitch,Yaw) in degrees, total latency (cl+tl), tag count, tag span, average tag distance from camera, average tag area (percentage of image) |
botpose_orb | doubleArray | Robot transform in field-space (MegaTag2). Translation (X,Y,Z) in meters, Rotation (Roll,Pitch,Yaw) in degrees, total latency (cl+tl), tag count, tag span, average tag distance from camera, average tag area (percentage of image) |
botpose_orb_wpiblue | doubleArray | Robot transform in field-space (MegaTag2) (blue driverstation WPILIB origin). Translation (X,Y,Z) in meters, Rotation (Roll,Pitch,Yaw) in degrees, total latency (cl+tl), tag count, tag span, average tag distance from camera, average tag area (percentage of image) |
botpose_orb_wpired | doubleArray | Robot transform in field-space (MegaTag2) (red driverstation WPILIB origin). Translation (X,Y,Z) in meters, Rotation (Roll,Pitch,Yaw) in degrees, total latency (cl+tl), tag count, tag span, average tag distance from camera, average tag area (percentage of image) |
camerapose_targetspace | doubleArray | 3D transform of the camera in the coordinate system of the primary in-view AprilTag (array (6)) [tx, ty, tz, pitch, yaw, roll] (meters, degrees) |
targetpose_cameraspace | doubleArray | 3D transform of the primary in-view AprilTag in the coordinate system of the Camera (array (6)) [tx, ty, tz, pitch, yaw, roll] (meters, degrees) |
targetpose_robotspace | doubleArray | 3D transform of the primary in-view AprilTag in the coordinate system of the Robot (array (6)) [tx, ty, tz, pitch, yaw, roll] (meters, degrees) |
botpose_targetspace | doubleArray | 3D transform of the robot in the coordinate system of the primary in-view AprilTag (array (6)) [tx, ty, tz, pitch, yaw, roll] (meters, degrees) |
camerapose_robotspace | doubleArray | 3D transform of the camera in the coordinate system of the robot (array (6)) |
tid | int | ID of the primary in-view AprilTag |
priorityid | int (setter) | SET the required ID for tx/ty targeting. Ignore other targets. Does not affect localization |
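The botpose arrays above share one layout: six pose entries followed by total latency, tag count, tag span, average tag distance, and average tag area (11 values). A minimal sketch of unpacking such an array; the indices follow the order listed in the table, and the sample values below are made up for illustration.

```java
public class BotPoseSketch {
    // [x, y, z, roll, pitch, yaw, latencyMs, tagCount, tagSpan, avgDist, avgArea]
    static final int EXPECTED_LENGTH = 11;

    static double yawDegrees(double[] botpose)    { return botpose[5]; }
    static double totalLatencyMs(double[] botpose) { return botpose[6]; }
    static int tagCount(double[] botpose)          { return (int) botpose[7]; }

    // Convert the reported latency into a capture timestamp for a pose estimator.
    static double captureTimeSeconds(double nowSeconds, double[] botpose) {
        return nowSeconds - totalLatencyMs(botpose) / 1000.0;
    }

    public static void main(String[] args) {
        // Fabricated sample array in the documented order.
        double[] sample = {1.2, 3.4, 0.0, 0.0, 0.0, 45.0, 20.0, 2, 1.5, 2.2, 0.8};
        if (sample.length == EXPECTED_LENGTH && tagCount(sample) > 0) {
            System.out.println("yaw=" + yawDegrees(sample)
                + " capturedAt=" + captureTimeSeconds(10.0, sample));
        }
    }
}
```

Rejecting frames where the tag count is zero, as the guard above does, avoids feeding empty poses into an estimator.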
Camera Controls
Use the following code to set this data:

Java:

```java
NetworkTableInstance.getDefault().getTable("limelight").getEntry("<variablename>").setNumber(<value>);
```

C++:

```cpp
nt::NetworkTableInstance::GetDefault().GetTable("limelight")->PutNumber("<variablename>", <value>);
```

Python:

```python
NetworkTables.getTable("limelight").putNumber('<variablename>', <value>)
```
ledMode | Sets limelight's LED state |
---|---|
[0] | use the LED Mode set in the current pipeline |
[1] | force off |
[2] | force blink |
[3] | force on |
camMode | Sets limelight's operation mode |
---|---|
0 | Vision processor |
1 | Driver Camera (Increases exposure, disables vision processing) |
pipeline | Sets limelight's current pipeline |
---|---|
0 .. 9 | Select pipeline 0..9 |
stream | Sets limelight's streaming mode |
---|---|
0 | Standard - Side-by-side streams if a webcam is attached to Limelight |
1 | PiP Main - The secondary camera stream is placed in the lower-right corner of the primary camera stream |
2 | PiP Secondary - The primary camera stream is placed in the lower-right corner of the secondary camera stream |
snapshot | Allows users to take snapshots during a match |
---|---|
0 | Reset snapshot mode |
1 | Take exactly one snapshot |
crop | (Array) Sets the crop rectangle. The pipeline must utilize the default crop rectangle in the web interface. The array must have exactly 4 entries. |
---|---|
[0] | X0 - Min or Max X value of crop rectangle (-1 to 1) |
[1] | X1 - Min or Max X value of crop rectangle (-1 to 1) |
[2] | Y0 - Min or Max Y value of crop rectangle (-1 to 1) |
[3] | Y1 - Min or Max Y value of crop rectangle (-1 to 1) |
camerapose_robotspace_set | (Array) Set the camera's pose in the coordinate system of the robot. |
priorityid | SET the required ID for tx/ty targeting. Ignore other targets. Does not affect localization |
robot_orientation_set | SET Robot Orientation and angular velocities in degrees and degrees per second [yaw, yawrate, pitch, pitchrate, roll, rollrate] |
fiducial_id_filters_set | Override valid fiducial ids for localization (array) |
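robot_orientation_set expects a six-entry array. A minimal sketch of building it; leaving pitch, roll, and all rates at zero is a common simplification, but that is an assumption here, not something this table specifies.

```java
public class OrientationSketch {
    // Build the [yaw, yawrate, pitch, pitchrate, roll, rollrate] array
    // expected by robot_orientation_set (degrees and degrees per second).
    static double[] orientationEntry(double yawDegrees, double yawRateDegPerSec) {
        // Pitch/roll and their rates are zeroed here; whether that is
        // sufficient depends on your setup (assumption, not from this table).
        return new double[] {yawDegrees, yawRateDegPerSec, 0.0, 0.0, 0.0, 0.0};
    }

    public static void main(String[] args) {
        double[] entry = orientationEntry(90.0, 0.0);
        // Publish with the setter pattern shown above, e.g.:
        // NetworkTableInstance.getDefault().getTable("limelight")
        //     .getEntry("robot_orientation_set").setDoubleArray(entry);
        System.out.println(entry.length);
    }
}
```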
Java:

```java
// Open the crop window to the full image.
// The pipeline's crop rectangle in the web interface must be left at its default.
double[] cropValues = new double[4];
cropValues[0] = -1.0;
cropValues[1] = 1.0;
cropValues[2] = -1.0;
cropValues[3] = 1.0;
NetworkTableInstance.getDefault().getTable("limelight").getEntry("crop").setDoubleArray(cropValues);
```
Python
Python scripts allow for arbitrary inbound and outbound data.
llpython | NumberArray sent by Python scripts. This is accessible from robot code. |
---|---|
llrobot | NumberArray sent by the robot. This is accessible from Python scripts. |
Raw Contours
Corners:
Enable "send contours" in the "Output" tab to stream corner coordinates:
tcornxy | Number array of corner coordinates [x0,y0,x1,y1,...] |
---|---|
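A sketch of regrouping the flat corner array into (x, y) pairs, assuming the even-length [x0,y0,x1,y1,...] layout documented above.

```java
import java.util.ArrayList;
import java.util.List;

public class CornerSketch {
    // Regroup the flat tcornxy array [x0,y0,x1,y1,...] into (x, y) pairs.
    static List<double[]> toPoints(double[] tcornxy) {
        List<double[]> points = new ArrayList<>();
        for (int i = 0; i + 1 < tcornxy.length; i += 2) {
            points.add(new double[] {tcornxy[i], tcornxy[i + 1]});
        }
        return points;
    }

    public static void main(String[] args) {
        List<double[]> pts = toPoints(new double[] {10, 20, 30, 40});
        System.out.println(pts.size()); // number of corners recovered
    }
}
```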
Raw Targets:
Limelight posts three raw contours to NetworkTables that are not influenced by your grouping mode. That is, they are filtered with your pipeline parameters, but never grouped. X and Y are returned in normalized screen space (-1 to 1) rather than degrees.
rawtargets | [txnc,tync,ta,txnc2,tync2,ta2,...] |
---|---|
Raw Fiducials:
Get all valid (unfiltered) fiducials:
rawfiducials | [id,txnc,tync,ta,distToCamera,distToRobot,ambiguity,id2,...] |
---|---|
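Since each fiducial occupies seven consecutive entries, the flat array can be split with a stride of 7. A sketch of that parsing, with the field order taken from the row above and a fabricated sample array.

```java
import java.util.ArrayList;
import java.util.List;

public class RawFiducialSketch {
    // One fiducial per 7 entries:
    // [id, txnc, tync, ta, distToCamera, distToRobot, ambiguity]
    static final int STRIDE = 7;

    record Fiducial(int id, double txnc, double tync, double ta,
                    double distToCamera, double distToRobot, double ambiguity) {}

    static List<Fiducial> parse(double[] raw) {
        List<Fiducial> out = new ArrayList<>();
        for (int i = 0; i + STRIDE <= raw.length; i += STRIDE) {
            out.add(new Fiducial((int) raw[i], raw[i + 1], raw[i + 2], raw[i + 3],
                    raw[i + 4], raw[i + 5], raw[i + 6]));
        }
        return out;
    }

    public static void main(String[] args) {
        // Fabricated sample: one fiducial with id 7.
        double[] sample = {7, 1.5, -0.5, 2.0, 3.1, 3.4, 0.1};
        System.out.println(parse(sample).get(0).id());
    }
}
```

A per-fiducial record like this makes it straightforward to, for example, discard high-ambiguity tags before using them.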
Raw Crosshairs:
If you are using raw targeting data, you can still utilize your calibrated crosshairs:
cx0 | Crosshair A X in normalized screen space |
---|---|
cy0 | Crosshair A Y in normalized screen space |
cx1 | Crosshair B X in normalized screen space |
cy1 | Crosshair B Y in normalized screen space |