AI Vision Sensor#
Introduction#
The aivision class is used to control and access data from the V5 AI Vision Sensor. The AI Vision Sensor can detect:
AI Classifications (such as game objects)
AprilTag IDs
Custom Color Signatures
Custom Color Codes
It provides object data including position, size, orientation, classification ID, and confidence score.
The sensor processes visual information using an onboard AI model selected in the AI Vision Utility within VEXcode. The selected model determines which AI Classifications the sensor can detect. When using VS Code, the AI model must first be configured in VEXcode before it can be used in your program. Detected objects are returned through the objects array after takeSnapshot is called.
Class Constructors#
aivision(
int32_t index,
Args&... sigs );
Parameters#
| Parameter | Type | Description |
|---|---|---|
| index | int32_t | The Smart Port that the AI Vision Sensor is connected to, written as PORT followed by the port number, such as PORT1. |
| sigs | Args&... | One or more detection types to register with the sensor: Color Signatures (colordesc), Color Codes (codedesc), AprilTag detection (aivision::ALL_TAGS), or AI Classifications (aivision::ALL_AIOBJS). |
AI Models and Classifications#
The AI Vision Sensor can detect different objects depending on the AI Classification model selected when configuring the sensor in the Devices window. The currently available models are:
Classroom Elements
| ID Number | AI Classification |
|---|---|
| 0 | |
| 1 | |
| 2 | |
| 3 | |
| 4 | |
| 5 | |
| 6 | |
| 7 | |
| 8 | |
V5RC High Stakes
| ID Number | AI Classification |
|---|---|
| 0 | |
| 1 | |
| 2 | |
V5RC Push Back
| ID Number | AI Classification |
|---|---|
| 0 | |
| 1 | |
Examples#
// Create Color Signatures
aivision::colordesc AIVision1__greenBox(
1, // index
85, // red
149, // green
46, // blue
23, // hangle
0.23 ); // hdsat
aivision::colordesc AIVision1__blueBox(
2, // index
77, // red
135, // green
125, // blue
27, // hangle
0.29 ); // hdsat
// Create a Color Code from two color signatures
aivision::codedesc AIVision1__greenBlue(
1, // code index
AIVision1__greenBox, // first color signature
AIVision1__blueBox ); // second color signature
// Create the AI Vision Sensor instance
aivision AIVision1(
PORT11, // Smart Port
AIVision1__greenBlue, // color code
aivision::ALL_AIOBJS ); // enable AI Classifications
Member Functions#
The aivision class includes the following member functions:
takeSnapshot — Captures data for a specific Color Signature, Color Code, AI Classification group, or AprilTag group.
installed — Returns whether the AI Vision Sensor is connected to the V5 Brain.
To access detected object data after calling takeSnapshot, use the available Properties.
Before calling any aivision member functions, an aivision instance must be created, as shown below:
/* This constructor is required when using VS Code.
AI Vision Sensor configuration is generated automatically
in VEXcode using the Device Menu. Replace the values
as needed. */
// Create Color Signatures
aivision::colordesc AIVision1__greenBox(
1, // index
85, // red
149, // green
46, // blue
23, // hangle
0.23 ); // hdsat
aivision::colordesc AIVision1__blueBox(
2, // index
77, // red
135, // green
125, // blue
27, // hangle
0.29 ); // hdsat
// Create a Color Code from two color signatures
aivision::codedesc AIVision1__greenBlue(
1, // code index
AIVision1__greenBox, // first color signature
AIVision1__blueBox ); // second color signature
// Create the AI Vision Sensor instance
aivision AIVision1(
PORT1, // Smart Port
AIVision1__greenBlue, // color code
aivision::ALL_AIOBJS ); // enable AI Classifications
takeSnapshot#
Captures an image from the AI Vision Sensor, processes it using the selected AI model or configured color signatures, and updates the objects array.
Each call refreshes the objects array with the most recent detection results. Objects are ordered from largest to smallest (by width), beginning at index 0. If no objects are detected, objectCount will be 0 and objects[i].exists will be false.
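The largest-first ordering just described can be sketched in plain C++. The Detection struct below is a hypothetical stand-in for the sensor's object entries, not the actual vex type:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical stand-in for one entry of the objects array -- not the
// real vex type, just enough to model the largest-first ordering.
struct Detection {
    int width;
    int height;
};

// Sort detections the way takeSnapshot reports them: widest first,
// starting at index 0.
void orderLargestFirst(std::vector<Detection> &objs) {
    std::sort(objs.begin(), objs.end(),
              [](const Detection &a, const Detection &b) {
                  return a.width > b.width;
              });
}
```

After sorting, index 0 always holds the widest detection, matching how objects[0] is used throughout the examples below.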
1 — Takes a snapshot using an object ID and object type.
int32_t takeSnapshot( uint32_t id, objectType type, uint32_t count );
2 — Takes a snapshot using a Color Signature.
int32_t takeSnapshot( const colordesc &desc, int32_t count = 8 );
3 — Takes a snapshot using a Color Code.
int32_t takeSnapshot( const codedesc &desc, int32_t count = 8 );
4 — Takes a snapshot using an AprilTag ID.
int32_t takeSnapshot( const tagdesc &desc, int32_t count = 8 );
5 — Takes a snapshot using an AI Classification.
int32_t takeSnapshot( const aiobjdesc &desc, int32_t count = 8 );
6 — Takes a snapshot using an Object Descriptor.
int32_t takeSnapshot( const objdesc &desc, int32_t count = 8 );
Parameters
| Parameter | Type | Description |
|---|---|---|
| id | uint32_t | The identifier of the object to detect when using the id and type overload. Note: In VEXcode, AI Classification names (such as blueBall) may be used directly. In VS Code, the numeric ID must be used. |
| type | objectType | Specifies the category of object associated with id. |
| desc | colordesc, codedesc, tagdesc, aiobjdesc, or objdesc | Descriptor used to detect a specific object. Passed directly to takeSnapshot. |
| count | uint32_t or int32_t (depending on the overload) | Maximum number of objects stored from the snapshot. Defaults to 8. |
Return Values
Returns an int32_t representing the number of detected objects matching the specified signature or detection type.
Notes
The AI Vision Sensor must take a snapshot before object data can be accessed.
The objects array is refreshed on every call.
When count is specified, only the largest detected objects (up to the specified amount) are stored.
AI Classifications depend on the model selected in the AI Vision Utility in VEXcode.
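The count limit in the notes above can be sketched as a stand-alone C++ helper. Widths stand in for whole object entries here; this is a model of the idea, not the vex implementation:

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Model of the count limit: of all detected widths (in pixels), only
// the largest `count` are kept, mirroring how only the largest objects
// are stored when count is specified.
std::vector<int> keepLargest(std::vector<int> widths, std::size_t count) {
    std::sort(widths.begin(), widths.end(), std::greater<int>());
    if (widths.size() > count)
        widths.resize(count);  // everything past the limit is discarded
    return widths;
}
```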
Examples
// Move forward if an object is detected
while (true) {
AIVision1.takeSnapshot(aivision::ALL_AIOBJS);
if (AIVision1.objects[0].exists) {
Drivetrain.driveFor(forward, 50, mm);
}
wait(50, msec);
}
installed#
Returns whether the AI Vision Sensor is connected to the V5 Brain.
Available Functions
bool installed();
Parameters
This function does not take any parameters.
Return Values
Returns a Boolean indicating whether the AI Vision Sensor is connected:
true — The AI Vision Sensor is connected.
false — The AI Vision Sensor is not connected.
Examples
// Display a message if the AI Vision Sensor is connected
if (AIVision1.installed()){
Brain.Screen.print("Installed!");
}
Properties#
Calling takeSnapshot updates the AI Vision Sensor’s detection results. Each snapshot refreshes the objects array, which contains detected objects for the requested AI Classification, Color Signature, Color Code, or AprilTag ID.
AI Vision data is accessed through properties of objects stored in AIVisionSensor.objects[index], where index begins at 0.
Objects are ordered from largest to smallest (by area).
The AI Vision Sensor image resolution is 320 × 240 pixels. Object position and size values are reported in pixel units relative to the sensor’s current view.
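Since coordinates are reported in that 320 × 240 pixel frame, it is often convenient to normalize them around the image center for steering math. The helpers below are assumptions for illustration, not part of the aivision API:

```cpp
// The sensor reports positions in a 320 x 240 pixel frame. These
// helpers (not part of the vex API) map a pixel coordinate to a
// -1..+1 offset from the image center.
const double FRAME_W = 320.0;
const double FRAME_H = 240.0;

double normalizedX(double centerX) {
    return (centerX - FRAME_W / 2.0) / (FRAME_W / 2.0);
}

double normalizedY(double centerY) {
    return (centerY - FRAME_H / 2.0) / (FRAME_H / 2.0);
}
```

A centered object then reads as 0 on both axes, with +1/-1 at the frame edges.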
The following properties are available:
objectCount — Returns the number of detected objects from the most recent snapshot.
largestObject — Selects the largest detected object from the most recent snapshot.
objects — Array containing detected objects updated by takeSnapshot.
.exists — Whether the object entry contains valid data.
.width — Width of the detected object in pixels.
.height — Height of the detected object in pixels.
.centerX — X position of the object’s center in pixels.
.centerY — Y position of the object’s center in pixels.
.originX — X position of the object’s top-left corner in pixels.
.originY — Y position of the object’s top-left corner in pixels.
.angle — Orientation of a Color Code or AprilTag ID in degrees.
.id — Classification ID or AprilTag ID.
.score — Confidence score for AI Classifications.
objectCount#
objectCount returns the number of items inside the objects array as an integer.
AIVisionSensor.objectCount
| Component | Description |
|---|---|
| AIVisionSensor | The name of your AI Vision Sensor instance. |
Examples
// Display the number of detected objects
while (true) {
Brain.Screen.setCursor(1, 1);
Brain.Screen.clearLine(1);
AIVision1.takeSnapshot(aivision::ALL_AIOBJS);
if (AIVision1.objects[0].exists) {
Brain.Screen.print("%d", AIVision1.objectCount);
}
wait(50, msec);
}
largestObject#
largestObject retrieves the largest detected object from the objects array.
This method can be used to always get the largest object from objects without specifying an index.
AIVisionSensor.largestObject
| Component | Description |
|---|---|
| AIVisionSensor | The name of your AI Vision Sensor instance. |
Examples
// Display the closest AprilTag's ID
while (true) {
Brain.Screen.setCursor(1, 1);
Brain.Screen.clearLine(1);
AIVision1.takeSnapshot(aivision::ALL_TAGS);
if (AIVision1.objects[0].exists) {
Brain.Screen.print("%d", AIVision1.largestObject.id);
}
wait(50, msec);
}
objects#
objects returns an array of detected object properties. Use the array to access specific property values of individual objects.
There are ten properties that are included with each object stored in the objects array after takeSnapshot is used.
Some property values are based off of the detected object’s position in the AI Vision Sensor’s view at the time that takeSnapshot was used. The AI Vision Sensor has a resolution of 320 by 240 pixels.
AIVisionSensor.objects[index].property
| Component | Description |
|---|---|
| AIVisionSensor | The name of your AI Vision Sensor instance. |
| index | The object index in the array. Index begins at 0. |
| property | One of the available object properties. |
.exists#
Indicates whether the specified object index contains a valid detected object.
Access
SensorName.objects[index].exists
Return Values
Returns a Boolean indicating whether the specified object index contains a valid detected object:
- true — A valid object exists at the specified index.
- false — No object exists at the specified index.
Examples
// Move forward if an object is detected
while (true) {
AIVision1.takeSnapshot(aivision::ALL_AIOBJS);
if (AIVision1.objects[0].exists) {
Drivetrain.driveFor(forward, 50, mm);
}
wait(50, msec);
}
.width#
Returns the width of the detected object.
Access
SensorName.objects[index].width
Return Values
Returns an int16_t representing the width of the detected object in pixels. The value ranges from 1 to 320.
Examples
// Approach an object until it's at least 100 pixels wide
while (true) {
AIVision1.takeSnapshot(aivision::ALL_AIOBJS);
if (AIVision1.objects[0].exists) {
if (AIVision1.objects[0].width < 100) {
Drivetrain.drive(forward);
} else {
Drivetrain.stop();
}
} else {
Drivetrain.stop();
}
wait(50, msec);
}
.height#
Returns the height of the detected object.
Access
SensorName.objects[index].height
Return Values
Returns an int16_t representing the height of the detected object in pixels. The value ranges from 1 to 240.
Examples
// Approach an object until it's at least 90 pixels tall
while (true) {
AIVision1.takeSnapshot(aivision::ALL_AIOBJS);
if (AIVision1.objects[0].exists) {
if (AIVision1.objects[0].height < 90) {
Drivetrain.drive(forward);
} else {
Drivetrain.stop();
}
} else {
Drivetrain.stop();
}
wait(50, msec);
}
.centerX#
Returns the x-coordinate of the detected object’s center.
Access
SensorName.objects[index].centerX
Return Values
Returns an int16_t representing the x-coordinate of the object’s center in pixels. The value ranges from 0 to 320.
Examples
// Turn until an object is directly in front of the sensor
Drivetrain.setTurnVelocity(10, percent);
Drivetrain.turn(right);
while (true) {
AIVision1.takeSnapshot(aivision::ALL_AIOBJS);
if (AIVision1.objects[0].exists) {
if (AIVision1.objects[0].centerX > 140 && AIVision1.objects[0].centerX < 180) {
Drivetrain.stop();
}
}
wait(10, msec);
}
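The stop window in the example above (centerX between 140 and 180) is a 20-pixel deadband around the 160-pixel image center. That check can be factored into a hypothetical helper that returns a signed steering error:

```cpp
// Hypothetical helper, not part of the vex API: converts a centerX
// reading into a signed steering error with a deadband, matching the
// 140-180 stop window used in the example.
const int IMAGE_CENTER_X = 160;  // half of the 320-pixel frame width
const int DEADBAND = 20;

// Returns 0 when the object is close enough to center, otherwise the
// signed pixel offset (negative = object is left of center).
int steeringError(int centerX) {
    int err = centerX - IMAGE_CENTER_X;
    if (err > -DEADBAND && err < DEADBAND)
        return 0;
    return err;
}
```

A nonzero return could then drive a proportional turn instead of the bang-bang stop in the example.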
.centerY#
Returns the y-coordinate of the detected object’s center.
Access
SensorName.objects[index].centerY
Return Values
Returns an int16_t representing the y-coordinate of the object’s center in pixels. The value ranges from 0 to 240.
Examples
// Approach an object until it's close to the sensor
while (true) {
AIVision1.takeSnapshot(aivision::ALL_AIOBJS);
if (AIVision1.objects[0].exists) {
if (AIVision1.objects[0].centerY < 150) {
Drivetrain.drive(forward);
} else {
Drivetrain.stop();
}
} else {
Drivetrain.stop();
}
wait(50, msec);
}
.angle#
Returns the orientation of the detected Color Code or AprilTag ID.
Access
SensorName.objects[index].angle
Return Values
Returns a float representing the rotation of the detected Color Code or AprilTag ID in degrees. The value ranges from 0 to 360.
Examples
// Turn left or right depending on how a configured
// Color Code is rotated
while (true) {
AIVision1.takeSnapshot(AIVision1__redBlue);
if (AIVision1.objects[0].exists) {
if (AIVision1.objects[0].angle > 50 && AIVision1.objects[0].angle < 100) {
Drivetrain.turn(right);
}
else if (AIVision1.objects[0].angle > 270 && AIVision1.objects[0].angle < 330) {
Drivetrain.turn(left);
}
else {
Drivetrain.stop();
}
} else {
Drivetrain.stop();
}
wait(50, msec);
}
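Because .angle wraps within the 0 to 360 range, a slight counter-clockwise rotation is reported near 360 rather than as a negative value, which is why the example above treats 270 to 330 as a left rotation. A small sketch (not part of the vex API) folds the reading into a signed -180 to +180 error:

```cpp
#include <cmath>

// Hypothetical helper: fold a 0-360 angle reading into a signed
// -180..+180 error, so readings near 360 become small negative values.
double signedAngle(double angle) {
    double a = std::fmod(angle + 180.0, 360.0);
    if (a < 0.0)
        a += 360.0;
    return a - 180.0;  // 300 degrees becomes -60, i.e. a left rotation
}
```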
.originX#
Returns the x-coordinate of the top-left corner of the detected object’s bounding box.
Access
SensorName.objects[index].originX
Return Values
Returns an int16_t representing the x-coordinate of the object’s bounding box origin in pixels. The value ranges from 0 to 320.
Examples
// Display if an object is to the left or the right
while (true) {
Brain.Screen.clearScreen();
Brain.Screen.setCursor(1, 1);
AIVision1.takeSnapshot(aivision::ALL_AIOBJS);
if (AIVision1.objects[0].exists) {
if (AIVision1.objects[0].originX < 120) {
Brain.Screen.print("To the left!");
} else {
Brain.Screen.print("To the right!");
}
} else {
Brain.Screen.print("No objects");
}
wait(100, msec);
}
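Since originX marks the bounding box's left edge and centerX its middle, the two should differ by roughly half the object's width. A hypothetical sketch of that relationship (plain struct and integer pixel math assumed, not the vex type):

```cpp
// Hypothetical bounding-box model, not the vex type: originX is the
// left edge, so the horizontal center sits half a width to the right.
struct Box {
    int originX;  // x of the top-left corner, in pixels
    int width;    // box width, in pixels
};

int centerXOf(const Box &b) {
    return b.originX + b.width / 2;  // integer pixel arithmetic
}
```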
.originY#
Returns the y-coordinate of the top-left corner of the detected object’s bounding box.
Access
SensorName.objects[index].originY
Return Values
Returns an int16_t representing the y-coordinate of the object’s bounding box origin in pixels. The value ranges from 0 to 240.
Examples
// Display if an object is close or far
while (true) {
Brain.Screen.clearScreen();
Brain.Screen.setCursor(1, 1);
AIVision1.takeSnapshot(aivision::ALL_AIOBJS);
if (AIVision1.objects[0].exists) {
if (AIVision1.objects[0].originY < 110) {
Brain.Screen.print("Close");
} else {
Brain.Screen.print("Far");
}
}
wait(100, msec);
}
.id#
Returns the ID of an AprilTag ID or AI Classification.
Access
SensorName.objects[index].id
Return Values
Returns an int32_t representing the ID of the detected object:
For AI Classifications, this corresponds to the ID defined by the selected AI model.
For AprilTag IDs, this represents the detected AprilTag ID number (0–36).
Examples
// Move forward when AprilTag ID 1 is detected
while (true) {
AIVision1.takeSnapshot(aivision::ALL_TAGS);
if (AIVision1.objects[0].exists) {
if (AIVision1.objects[0].id == 1) {
Drivetrain.drive(forward);
}
} else {
Drivetrain.stop();
}
wait(50, msec);
}
.score#
Returns the confidence score of the detected AI Classification.
Access
SensorName.objects[index].score
Return Values
Returns an int16_t indicating the confidence score of the detected AI Classification between 1 and 100.
Examples
// Display if a score is confident
while (true) {
AIVision1.takeSnapshot(aivision::ALL_AIOBJS);
if (AIVision1.objects[0].exists) {
Brain.Screen.clearScreen();
Brain.Screen.setCursor(1, 1);
if (AIVision1.objects[0].score > 95) {
Brain.Screen.print("Confident");
} else {
Brain.Screen.print("Not confident");
}
}
wait(50, msec);
}
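A common pattern is to ignore low-confidence classifications entirely rather than branch on each one. This stand-alone sketch (hypothetical Hit struct, not the vex API) keeps only entries at or above a threshold:

```cpp
#include <vector>

// Hypothetical model of score filtering, not the vex API: each Hit
// pairs a classification ID with its 1-100 confidence score.
struct Hit {
    int id;
    int score;
};

// Keep only hits whose confidence meets the threshold.
std::vector<Hit> filterConfident(const std::vector<Hit> &hits, int minScore) {
    std::vector<Hit> kept;
    for (const Hit &h : hits)
        if (h.score >= minScore)
            kept.push_back(h);
    return kept;
}
```

With a threshold of 95, this mirrors the "Confident" branch of the example above applied across all detected objects at once.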
