aivision#
The AI Vision Sensor must be connected to your V5 Brain and configured in VEXcode V5 before it can be used. For setup information, see Getting Started with the AI Vision Sensor with VEX V5.
Refer to these articles for more information about using the AI Vision Sensor.
For more detailed information about using the AI Vision Sensor with C++ in VEXcode V5, read Coding with the AI Vision Sensor in VEXcode V5 C++.
Initializing the aivision Class#
An AI Vision Sensor is created by using one of the following constructors:
The `aivision(port)` constructor creates an aivision object on the specified port.
Parameter | Description
---|---
`port` | A valid Smart Port that the AI Vision Sensor is connected to.
// Create a new AI Vision Sensor "aiVision" with the aivision class.
aivision aiVision = aivision(PORT1);
The `aivision(port, desc, ...)` constructor uses two or more parameters:
Parameter | Description
---|---
`port` | A valid Smart Port that the AI Vision Sensor is connected to.
`desc` | The name of one or more colordesc, codedesc, tagdesc, or aiobjdesc objects.
// Create a new Color Signature "Red" with the colordesc class.
aivision::colordesc Red = aivision::colordesc(1, 207, 19, 25, 10.00, 0.20);
// Create a new AI Vision Sensor "aiVision" with the aivision
// class, with the "Red" colordesc.
aivision aiVision = aivision(PORT1, Red);
This `aiVision` object and `Red` colordesc object will be used in all subsequent examples throughout this API documentation when referring to aivision class methods. Examples that detect blue objects assume that a second colordesc named `aiVision__Blue` has been created in the same way.
Class Methods#
takeSnapshot()#
The `takeSnapshot` method takes a picture of what the AI Vision Sensor is currently seeing and pulls data from that snapshot that can then be used in a project.
Taking a snapshot will store all of the detected objects that you specified in the AI Vision Sensor’s instance. For example, if you wanted to detect a “Blue” Color Signature, and the AI Vision Sensor detected 3 different blue objects, data from all three would be put in the array.
The `takeSnapshot(desc, count)` method takes the current snapshot visible to the AI Vision Sensor and detects the objects of a specified object description.
Parameters | Description
---|---
`desc` | The name of the colordesc, codedesc, tagdesc, or aiobjdesc object to detect.
`count` | Optional. The maximum number of objects to obtain. The default is 8.
Returns: An integer representing the number of objects found matching the description passed as a parameter.
while (true){
// Take a snapshot of the red objects detected by
// the AI Vision Sensor.
aiVision.takeSnapshot(Red);
// Clear the screen/reset so that we can display
// new information.
Brain.Screen.clearScreen();
Brain.Screen.setCursor(1, 1);
// Print the number of objects detected in the
// snapshot to the Brain's screen.
Brain.Screen.print("Object Count: %d", aiVision.objectCount);
// Wait 0.5 seconds before repeating the loop and
// taking a new snapshot.
wait(0.5, seconds);
}
objects#
The `objects` method allows you to access stored properties of objects from the last taken snapshot.
Available properties:
- `id`
- `centerX` and `centerY`
- `originX` and `originY`
- `width` and `height`
- `angle`
- `exists`
- `score`
To access an object’s property, use the name of the AI Vision Sensor, followed by the objects method, and then the object’s index. For example: aiVision.objects[0].width
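For example, a minimal sketch (using the `aiVision` object and `Red` colordesc created above) that prints the width of the largest detected object:
// Take a snapshot of the red objects detected by
// the AI Vision Sensor.
aiVision.takeSnapshot(Red);
// Check to make sure an object was detected in the snapshot before pulling data.
if (aiVision.objectCount > 0) {
  // Print the width of the largest detected object in pixels.
  Brain.Screen.print("Width: %d", aiVision.objects[0].width);
}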
id#
The `id` property is only available for AprilTags and AI Classifications.
For an AprilTag, the `id` property represents the detected AprilTag's ID number.
For AI Classifications, the `id` property represents the specific type of AI Classification detected. For more information on what IDs AI Classifications have, go to this article.
To call the `id` property, a snapshot must be taken using the `aiVision.takeSnapshot` command. The array is sorted by object area in pixels, from largest to smallest, with indices starting at 0.
Note: AprilTags are sorted by their unique IDs in ascending order, not by size. For example, if AprilTags 1, 15, and 3 are detected:
AprilTag 1 is at index 0.
AprilTag 3 is at index 1.
AprilTag 15 is at index 2.
To call this property, use the `objects` method followed by the index of the detected object to pull the property from. For example: `aiVision.objects[0].id`.
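For example, a minimal sketch (assuming the `aiVision` object from above with AprilTag detection enabled) that prints the ID of every AprilTag detected in a snapshot:
while (true) {
  // Get a snapshot of all AprilTags.
  aiVision.takeSnapshot(aivision::ALL_TAGS);
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1, 1);
  // Print each detected AprilTag's ID on its own line.
  for (int i = 0; i < aiVision.objectCount; i++) {
    Brain.Screen.print("Tag ID: %d", aiVision.objects[i].id);
    Brain.Screen.newLine();
  }
  wait(0.5, seconds);
}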
centerX and centerY#
The `centerX` and `centerY` properties report the center coordinates of the detected object in pixels.
To call the `centerX` or `centerY` property, a snapshot must be taken using the `aiVision.takeSnapshot` command. The array is sorted by object area in pixels, from largest to smallest, with indices starting at 0.
To call a property, use the `objects` method followed by the index of the detected object to pull the property from. For example: `aiVision.objects[0].centerX`.
In this example, because the center of the AI Vision Sensor’s view is (160, 120), the robot will turn right until a detected object’s centerX coordinate is greater than 150 pixels, but less than 170 pixels.
while (true) {
// Get a snapshot of all Blue Color objects.
aiVision.takeSnapshot(aiVision__Blue);
// Check to make sure an object was detected in the snapshot before pulling data.
if (aiVision.objectCount > 0) {
if (aiVision.objects[0].centerX < 150.0 || aiVision.objects[0].centerX > 170.0) {
Drivetrain.turn(right);
} else {
Drivetrain.stop();
}
}
wait(5, msec);
}
originX and originY#
The `originX` and `originY` properties report the coordinates, in pixels, of the top-left corner of the object's bounding box.
To call the `originX` or `originY` property, a snapshot must be taken using the `aiVision.takeSnapshot` command. The array is sorted by object area in pixels, from largest to smallest, with indices starting at 0.
To call a property, use the `objects` method followed by the index of the detected object to pull the property from. For example: `aiVision.objects[0].originX`.
In this example, a rectangle will be drawn on the Brain's screen with the exact measurements of the specified object's bounding box.
while (true) {
// Get a snapshot of all Blue objects.
aiVision.takeSnapshot(aiVision__Blue);
Brain.Screen.clearScreen();
// Check to make sure an object was detected in the snapshot before pulling data.
if (aiVision.objectCount > 0) {
Brain.Screen.drawRectangle(aiVision.objects[0].originX, aiVision.objects[0].originY, aiVision.objects[0].width, aiVision.objects[0].height);
}
wait(5, msec);
}
width and height#
The `width` and `height` properties report the width or height of the detected object in pixels.
To call the `width` or `height` property, a snapshot must be taken using the `aiVision.takeSnapshot` command. The array is sorted by object area in pixels, from largest to smallest, with indices starting at 0.
To call a property, use the `objects` method followed by the index of the detected object to pull the property from. For example: `aiVision.objects[0].width`.
In this example, the width of the object is used for navigation. The robot will approach the object until the width has reached a specific size before stopping.
while (true) {
// Get a snapshot of all Blue objects.
aiVision.takeSnapshot(aiVision__Blue);
// Check to make sure an object was detected in the snapshot before pulling data.
if (aiVision.objectCount > 0) {
if (aiVision.objects[0].width < 250.0) {
Drivetrain.drive(forward);
} else {
Drivetrain.stop();
}
}
wait(5, msec);
}
angle#
The `angle` property is only available for Color Codes and AprilTags. This property reports the detected Color Code's or AprilTag's angle.
To call the `angle` property, a snapshot must be taken using the `aiVision.takeSnapshot` command. The array is sorted by object area in pixels, from largest to smallest, with indices starting at 0.
To call this property, use the `objects` method followed by the index of the detected object to pull the property from. For example: `aiVision.objects[0].angle`.
In this example, the AprilTag’s angle is printed to the Brain’s screen.
while (true) {
// Get a snapshot of all AprilTags.
aiVision.takeSnapshot(aivision::ALL_TAGS);
Brain.Screen.clearScreen();
// Check to make sure an object was detected in the
// snapshot before pulling data.
if (aiVision.objects[0].exists == true) {
Brain.Screen.print(aiVision.objects[0].angle);
}
wait(5, msec);
}
exists#
This property returns a boolean value for whether or not the specified object exists.
To call the `exists` property, a snapshot must be taken using the `aiVision.takeSnapshot` command. The array is sorted by object area in pixels, from largest to smallest, with indices starting at 0.
To call this property, use the `objects` method followed by the index of the detected object to pull the property from. For example: `aiVision.objects[0].exists`.
In this example, the robot checks if an AprilTag is detected before printing its angle to the Brain’s screen.
while (true) {
// Get a snapshot of all AprilTags.
aiVision.takeSnapshot(aivision::ALL_TAGS);
Brain.Screen.clearScreen();
// Check to make sure an object was detected in the
// snapshot before pulling data.
if (aiVision.objects[0].exists == true) {
Brain.Screen.print(aiVision.objects[0].angle);
}
wait(5, msec);
}
score#
The `score` property is only available for AI Classifications.
This property returns the confidence score of the specified AI Classification. The score ranges from 0% to 100%, indicating the AI Vision Sensor's level of certainty in its detection accuracy.
To call the `score` property, a snapshot must be taken using the `aiVision.takeSnapshot` command. The array is sorted by object area in pixels, from largest to smallest, with indices starting at 0.
To call this property, use the `objects` method followed by the index of the detected object to pull the property from. For example: `aiVision.objects[0].score`.
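For example, a minimal sketch that prints the confidence score of the largest detected AI Classification. It assumes AI Classification detection is enabled; `aivision::ALL_AIOBJS` is used here on the assumption that it is the AI Classification counterpart of `aivision::ALL_TAGS` shown above:
while (true) {
  // Get a snapshot of all AI Classifications.
  // aivision::ALL_AIOBJS is assumed by analogy with aivision::ALL_TAGS.
  aiVision.takeSnapshot(aivision::ALL_AIOBJS);
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1, 1);
  // Check to make sure an object was detected in the snapshot before pulling data.
  if (aiVision.objectCount > 0) {
    Brain.Screen.print("Score: %d", aiVision.objects[0].score);
  }
  wait(0.5, seconds);
}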
objectCount#
The `objectCount` property contains the number of objects found in the most recent snapshot.
Returns: An integer representing the number of objects found in the most recent snapshot.
while (true){
// Take a snapshot of the red objects detected by
// the AI Vision Sensor.
aiVision.takeSnapshot(Red);
// Clear the screen/reset so that we can display
// new information.
Brain.Screen.clearScreen();
Brain.Screen.setCursor(1, 1);
// Print the number of objects detected in the
// snapshot to the Brain's screen.
Brain.Screen.print("Object Count: %d", aiVision.objectCount);
// Wait 0.5 seconds before repeating the loop and
// taking a new snapshot.
wait(0.5, seconds);
}
tagDetection()#
The `tagDetection(enable)` method enables or disables AprilTag detection.
Parameters | Description
---|---
`enable` | A boolean value which enables or disables AprilTag detection.
Returns: None.
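A minimal usage sketch, assuming the `aiVision` object from above:
// Enable AprilTag detection so snapshots can report AprilTags.
aiVision.tagDetection(true);
// Get a snapshot of all AprilTags.
aiVision.takeSnapshot(aivision::ALL_TAGS);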
colorDetection()#
The `colorDetection(enable, merge)` method enables or disables color and code object detection.
Parameters | Description
---|---
`enable` | A boolean value which enables or disables color and code object detection.
`merge` | A boolean value which enables or disables the merging of adjacent color detections. The default is `false`.
Returns: None.
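A minimal usage sketch, assuming the `aiVision` object and `Red` colordesc from above:
// Enable color and code object detection, merging adjacent
// detections of the same color into single objects.
aiVision.colorDetection(true, true);
// Get a snapshot of objects matching the "Red" Color Signature.
aiVision.takeSnapshot(Red);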
modelDetection()#
The `modelDetection(enable)` method enables or disables AI model object detection, also known as AI Classification detection.
Parameters | Description
---|---
`enable` | A boolean value which enables or disables AI Classification detection.
Returns: None.
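A minimal usage sketch, assuming the `aiVision` object from above:
// Enable AI Classification detection before taking snapshots
// that should report AI model objects.
aiVision.modelDetection(true);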
startAwb()#
The `startAwb()` method runs auto white balance.
Returns: None.
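A minimal usage sketch; running auto white balance once at the start of a project can help keep color detection consistent under the current lighting (assuming the `aiVision` object from above):
// Run auto white balance so Color Signatures are detected
// consistently under the current lighting conditions.
aiVision.startAwb();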
set()#
The `set(desc)` method sets a new Color Signature or Color Code.
Parameters | Description
---|---
`desc` | The colordesc or codedesc object to set.
Returns: None.
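A minimal sketch that defines a new Color Signature at runtime and applies it. The "Green" name and its color and tolerance values are illustrative placeholders, not values from this documentation:
// Create a new Color Signature "Green" with the colordesc class.
// These color and tolerance values are placeholders.
aivision::colordesc Green = aivision::colordesc(2, 20, 150, 40, 10.00, 0.20);
// Apply the new Color Signature to the AI Vision Sensor.
aiVision.set(Green);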
timestamp()#
The `timestamp()` method requests the timestamp of the last received status packet from the AI Vision Sensor.
Returns: Timestamp of the last status packet as an unsigned 32-bit integer in milliseconds.
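A minimal usage sketch, assuming the `aiVision` object from above:
// Print the timestamp of the last status packet received
// from the AI Vision Sensor, in milliseconds.
Brain.Screen.print("Last packet: %ld ms", (long)aiVision.timestamp());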
installed()#
The `installed()` method checks for device connection.
Returns: `true` if the AI Vision Sensor is connected, `false` if it is not.
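A minimal usage sketch, assuming the `aiVision` object and `Red` colordesc from above:
// Only attempt a snapshot if the AI Vision Sensor is connected.
if (aiVision.installed()) {
  aiVision.takeSnapshot(Red);
} else {
  Brain.Screen.print("AI Vision Sensor not connected");
}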