AI Vision Sensor#
Introduction#
The AI Vision Sensor can detect and track objects, colors, and AprilTags. This allows the robot to analyze its surroundings, follow objects, and react based on detected visual data.
For the examples below, the configured AI Vision Sensor will be named ai_vision_1, and the configured Color Signature objects, such as RED_BOX, will be used in all subsequent examples throughout this API documentation when referring to AiVision class methods.
Below is a list of all methods:
Getters – Get data from the AI Vision Sensor.
take_snapshot – Captures data for a specific Signature.
installed – Whether the AI Vision Sensor is connected to the IQ (2nd gen) Brain.
Properties – Object data returned from take_snapshot.
.exists – Whether the object exists in the current detection as a Boolean.
.width – Width of the detected object in pixels.
.height – Height of the detected object in pixels.
.centerX – X position of the object’s center in pixels.
.centerY – Y position of the object’s center in pixels.
.angle – Orientation of the Color Code in degrees.
.originX – X position of the object’s top-left corner in pixels.
.originY – Y position of the object’s top-left corner in pixels.
.id – Classification or tag ID of the object.
.score – Confidence score for AI Classifications (1–100).
Constructors – Manually initialize and configure the sensors.
take_snapshot#
take_snapshot filters the data from the AI Vision Sensor’s frame to return a tuple. The AI Vision Sensor can detect configured Color Signatures and Color Codes, AI Classifications, and AprilTags.
Color Signatures and Color Codes must be configured first in the Vision Utility before they can be used with this method.
The tuple stores objects ordered from largest to smallest by width, starting at index 0. Each object’s properties can be accessed using its index. An empty tuple is returned if no matching objects are detected.
Usage:
ai_vision_1.take_snapshot(SIGNATURE)
| Parameters | Description |
|---|---|
| signature | What signature to get data of. |
| count | Optional. Sets the maximum number of objects that can be returned, from 1 to 24 (default: 8). |
# Move forward if an object is detected
while True:
    objects = ai_vision_1.take_snapshot(AiVision.ALL_AIOBJS)
    if objects:
        drivetrain.drive_for(FORWARD, 50, MM)
    wait(50, MSEC)
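Since the returned tuple is ordered from widest to narrowest, index 0 is always the largest match. The following is a minimal sketch, not part of the official examples, that assumes the same ai_vision_1 and brain devices used throughout this page and prints the width of the first few detected objects in that order.
# Print the widths of the largest detected objects, in order
while True:
    brain.screen.clear_screen()
    objects = ai_vision_1.take_snapshot(AiVision.ALL_AIOBJS)
    # objects[0] is the widest match; later indexes are smaller
    for row, obj in enumerate(objects[:4], start=1):
        brain.screen.set_cursor(row, 1)
        brain.screen.print(obj.width)
    wait(100, MSEC)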
Color Signatures#
A Color Signature is a unique color that the AI Vision Sensor can recognize. These signatures allow the AI Vision Sensor to detect and track objects based on their color. Once a Color Signature is configured, the sensor can identify objects with that specific color in its field of view. Color Signatures are used with take_snapshot to process and detect colored objects in real time.
In order to use a configured Color Signature in a project, its name must be the name of the sensor, two underscores, and then the Color Signature’s name. For example: ai_vision_1__RED_BOX.
# Display if any objects match the RED_BOX signature
while True:
    brain.screen.set_cursor(1, 1)
    brain.screen.clear_row(1)
    # Change to any configured Color Signature
    red_box = ai_vision_1.take_snapshot(ai_vision_1__RED_BOX)
    if red_box:
        brain.screen.print("Color detected!")
    wait(100, MSEC)
Color Codes#
A Color Code is a structured pattern made up of color signatures arranged in a specific order. These codes allow the AI Vision Sensor to recognize predefined patterns of colors. Color Codes are useful for identifying complex objects or creating unique markers for autonomous navigation.
In order to use a configured Color Code in a project, its name must be the name of the sensor, two underscores, and then the Color Code’s name. For example: ai_vision_1__BOX_CODE.
# Display if any objects match the BOX_CODE code
while True:
    brain.screen.set_cursor(1, 1)
    brain.screen.clear_row(1)
    # Change to any configured Color Code
    box_code = ai_vision_1.take_snapshot(ai_vision_1__BOX_CODE)
    if box_code:
        brain.screen.print("Code detected!")
    wait(100, MSEC)
installed#
installed returns a Boolean indicating whether the AI Vision Sensor is currently connected to the IQ (2nd gen) Brain.
True – The AI Vision Sensor is connected to the IQ (2nd gen) Brain.
False – The AI Vision Sensor is not connected to the IQ (2nd gen) Brain.
Usage:
ai_vision_1.installed()
| Parameters | Description |
|---|---|
| This method has no parameters. | |
# Display a message if the AI Vision Sensor is connected
if ai_vision_1.installed():
    brain.screen.print("Installed!")
Properties#
There are ten properties that are included with each object stored in a tuple after take_snapshot is used.
Some property values are based on the detected object’s position in the AI Vision Sensor’s view at the time that take_snapshot was used. The AI Vision Sensor has a resolution of 320 by 240 pixels.
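As a quick illustration of reading several of these properties together, the sketch below (an informal example assuming the same ai_vision_1 and brain devices used elsewhere on this page) displays the size and center coordinates of the largest detected object within the 320 by 240 pixel frame.
# Show the size and center of the largest detected object
while True:
    brain.screen.clear_screen()
    brain.screen.set_cursor(1, 1)
    objects = ai_vision_1.take_snapshot(AiVision.ALL_AIOBJS)
    if objects:
        largest = objects[0]
        # Width, height, and center coordinates are in pixels
        brain.screen.print(largest.width)
        brain.screen.set_cursor(2, 1)
        brain.screen.print(largest.height)
        brain.screen.set_cursor(3, 1)
        brain.screen.print(largest.centerX)
        brain.screen.set_cursor(4, 1)
        brain.screen.print(largest.centerY)
    wait(100, MSEC)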
.exists#
.exists returns a Boolean indicating whether the index exists in the tuple or not.
True – The index exists.
False – The index does not exist.
# Check if at least two objects are detected
while True:
    brain.screen.clear_screen()
    brain.screen.set_cursor(1, 1)
    objects = ai_vision_1.take_snapshot(AiVision.ALL_AIOBJS)
    if objects:
        if objects[1].exists:
            brain.screen.print("At least 2")
        else:
            brain.screen.print("Less than 2")
    wait(50, MSEC)
.width#
.width returns the width of the detected object in pixels, which is an integer between 1 and 320.
# Approach an object until it's at least 100 pixels wide
while True:
    objects = ai_vision_1.take_snapshot(AiVision.ALL_AIOBJS)
    if objects:
        if objects[0].width < 100:
            drivetrain.drive(FORWARD)
        else:
            drivetrain.stop()
    wait(50, MSEC)
.height#
.height returns the height of the detected object in pixels, which is an integer between 1 and 240.
# Approach an object until it's at least 90 pixels tall
while True:
    objects = ai_vision_1.take_snapshot(AiVision.ALL_AIOBJS)
    if objects:
        if objects[0].height < 90:
            drivetrain.drive(FORWARD)
        else:
            drivetrain.stop()
    wait(50, MSEC)
.centerX#
.centerX returns the x-coordinate of the detected object’s center in pixels, which is an integer between 0 and 320.
# Turn until an object is directly in front of the sensor
drivetrain.set_turn_velocity(10, PERCENT)
drivetrain.turn(RIGHT)
while True:
    objects = ai_vision_1.take_snapshot(AiVision.ALL_AIOBJS)
    if objects:
        if 140 < objects[0].centerX < 180:
            drivetrain.stop()
    wait(10, MSEC)
.centerY#
.centerY returns the y-coordinate of the detected object’s center in pixels, which is an integer between 0 and 240.
# Approach an object until the object's center is
# high enough in the field of view
while True:
    objects = ai_vision_1.take_snapshot(AiVision.ALL_AIOBJS)
    if objects:
        if objects[0].centerY < 150:
            drivetrain.drive(FORWARD)
        else:
            drivetrain.stop()
    wait(50, MSEC)
.angle#
.angle returns the orientation of the detected Color Code or AprilTag in degrees, which is an integer between 0 and 360.
# Turn left or right depending on how a
# configured Color Code is rotated
while True:
    box_code = ai_vision_1.take_snapshot(ai_vision_1__BOX_CODE)
    if box_code:
        if 50 < box_code[0].angle < 100:
            drivetrain.turn(RIGHT)
        elif 270 < box_code[0].angle < 330:
            drivetrain.turn(LEFT)
        else:
            drivetrain.stop()
    else:
        drivetrain.stop()
    wait(50, MSEC)
.originX#
.originX returns the x-coordinate of the top-left corner of the detected object’s bounding box in pixels, which is an integer between 0 and 320.
# Display if an object is to the left or the right
while True:
    brain.screen.clear_screen()
    brain.screen.set_cursor(1, 1)
    objects = ai_vision_1.take_snapshot(AiVision.ALL_AIOBJS)
    if objects:
        if objects[0].originX < 120:
            brain.screen.print("To the left!")
        else:
            brain.screen.print("To the right!")
    else:
        brain.screen.print("No objects")
    wait(100, MSEC)
.originY#
.originY returns the y-coordinate of the top-left corner of the detected object’s bounding box in pixels, which is an integer between 0 and 240.
# Display if an object is close or far
while True:
    brain.screen.clear_screen()
    brain.screen.set_cursor(1, 1)
    objects = ai_vision_1.take_snapshot(AiVision.ALL_AIOBJS)
    if objects:
        if objects[0].originY < 110:
            brain.screen.print("Close")
        else:
            brain.screen.print("Far")
    wait(100, MSEC)
.id#
.id returns the ID of the detected AI Classification or AprilTag as an integer.
For an AprilTag, the .id property represents the detected AprilTag’s ID number in the range of 0 to 36. For an AI Classification, the .id corresponds to one of the predefined IDs shown below.
| AI Classification | ID |
|---|---|
| Blue Ball | 0 |
| Green Ball | 1 |
| Red Ball | 2 |
| Blue Ring | 3 |
| Green Ring | 4 |
| Red Ring | 5 |
| Blue Cube | 6 |
| Green Cube | 7 |
| Red Cube | 8 |
# Move forward when AprilTag 1 is detected
while True:
    apriltags = ai_vision_1.take_snapshot(AiVision.ALL_TAGS)
    if apriltags:
        if apriltags[0].id == 1:
            drivetrain.drive(FORWARD)
        else:
            drivetrain.stop()
    wait(50, MSEC)
.score#
.score returns the confidence score of the detected AI Classification as an integer between 1 and 100.
# Display if a score is confident
while True:
    objects = ai_vision_1.take_snapshot(AiVision.ALL_AIOBJS)
    brain.screen.clear_screen()
    brain.screen.set_cursor(1, 1)
    if objects:
        if objects[0].score > 95:
            brain.screen.print("Confident")
        else:
            brain.screen.print("Not confident")
    wait(50, MSEC)
Constructors#
Constructors are used to manually create AiVision, Colordesc, and Codedesc objects, which are necessary for configuring the AI Vision Sensor outside of VEXcode.
AI Vision Sensor#
AiVision creates an AI Vision Sensor.
Usage:
AiVision(port, sigs)
| Parameters | Description |
|---|---|
| port | Which Smart Port the AI Vision Sensor is connected to, from 1 to 12. |
| sigs | Optional. The name of one or more configured signatures, such as a Color Signature, a Color Code, AiVision.ALL_AIOBJS, or AiVision.ALL_TAGS. |
Example
ai_vision_1 = AiVision(Ports.PORT1, AiVision.ALL_AIOBJS)
# Move forward if an object is detected
while True:
    objects = ai_vision_1.take_snapshot(AiVision.ALL_AIOBJS)
    if objects:
        drivetrain.drive_for(FORWARD, 50, MM)
    wait(50, MSEC)
Color Signature#
Colordesc creates a Color Signature. Up to seven different Color Signatures can be stored on an AI Vision Sensor at once.
Usage:
Colordesc(index, uMin, uMax, uMean, vMin, vMax, vMean, rgb, type)
| Parameter | Description |
|---|---|
| index | The index of the Color Signature, from 1 to 7. |
| uMin | The uMin value from the Vision Utility's Configuration window. |
| uMax | The uMax value from the Vision Utility's Configuration window. |
| uMean | The uMean value from the Vision Utility's Configuration window. |
| vMin | The vMin value from the Vision Utility's Configuration window. |
| vMax | The vMax value from the Vision Utility's Configuration window. |
| vMean | The vMean value from the Vision Utility's Configuration window. |
| rgb | The rgb value from the Vision Utility's Configuration window. |
| type | The type value from the Vision Utility's Configuration window. |
In order to obtain the values to create a Color Signature, go to the Vision Utility. Once a Color Signature is configured, copy the parameter values from the Configuration window.
Example
# Create a new Signature RED_BOX with the Colordesc class
RED_BOX = Colordesc(1, 10121, 10757, 10439, -1657, -1223, -1440, 2.5, 1)
# Create a new AI Vision Sensor "ai_vision_1" with the AiVision
# class, with the RED_BOX Signature.
ai_vision_1 = AiVision(Ports.PORT1, 100, RED_BOX)
# Move forward if a red object is detected
while True:
    red_object = ai_vision_1.take_snapshot(RED_BOX)
    if red_object:
        drivetrain.drive_for(FORWARD, 10, MM)
    wait(5, MSEC)
Color Code#
Codedesc creates a Color Code. It requires at least two already defined Color Signatures in order to be used. Up to eight different Color Codes can be stored on an AI Vision Sensor at once.
Usage:
Codedesc(sig1, sig2, sig3, sig4, sig5)
| Parameter | Description |
|---|---|
| sig1 | A previously created Color Signature. |
| sig2 | A previously created Color Signature. |
| sig3 | Optional. A previously created Color Signature. |
| sig4 | Optional. A previously created Color Signature. |
| sig5 | Optional. A previously created Color Signature. |
Example
# Create two new Signatures for a red and blue box
RED_BOX = Colordesc(1, 10121, 10757, 10439, -1657, -1223, -1440, 2.5, 1)
BLUE_BOX = Colordesc(2, -4443, -3373, -3908, 6253, 7741, 6997, 2.5, 1)
# Create a Color Code for a red box to the left of a blue box
RED_BLUE = Codedesc(RED_BOX, BLUE_BOX)
# Create a new AI Vision Sensor "ai_vision_1" with the AiVision
# class, with the RED_BOX and BLUE_BOX Signatures.
ai_vision_1 = AiVision(Ports.PORT1, 100, RED_BOX, BLUE_BOX)
# Display a message if the Color Code is detected
while True:
    brain.screen.set_cursor(1, 1)
    brain.screen.clear_row(1)
    # Change to any configured Color Code
    box_code = ai_vision_1.take_snapshot(RED_BLUE)
    if box_code:
        brain.screen.print("Code detected!")
    wait(100, MSEC)