AI Vision Sensing#
The AI Vision Sensor must be connected to your V5 Brain and configured in VEXcode V5 before it can be used. Go here for information about Getting Started with the AI Vision Sensor with VEX V5.
Refer to these articles for more information about using the AI Vision Sensor.
For more detailed information about using the AI Vision Sensor with Blocks in VEXcode V5, read Coding with the AI Vision Sensor in VEXcode V5 Blocks.
Take AI Vision Snapshot#
The Take Snapshot block is used to capture the current image from the AI Vision Sensor to be processed and analyzed for Visual Signatures.
A Visual Signature can be a Color Signature, Color Code, AprilTag, or AI Classification.
take a [AIVision5 v] snapshot of [AprilTags v]
A snapshot must be taken before any other AI Vision Sensor blocks can be used.
Choose which AI Vision Sensor to use.
Select which Visual Signature the AI Vision Sensor should take a snapshot of (a minimal sketch of each option follows this list):
AprilTags.
AI Classifications.
A configured Color Signature or Color Code.
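For reference, here is a minimal sketch of each option. It assumes an AI Vision Sensor configured as AIVision5 with a Color Signature named RedBox, as in the examples below; your configured names may differ.
take a [AIVision5 v] snapshot of [AprilTags v]
take a [AIVision5 v] snapshot of [AI Classifications v]
take a [AIVision5 v] snapshot of [RedBox v]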
When a snapshot is taken with the AI Vision Sensor, it creates an array with all of the detected objects and their properties stored inside.
It’s also important to take a new snapshot every time you want to use data from the AI Vision Sensor, so your robot isn’t using outdated data from an old snapshot in the array.
In this example, every 0.25 seconds, the AI Vision Sensor will take a snapshot of the RedBox Color Signature. This ensures that the data the robot is using is constantly updated.
Before any data is pulled from the snapshot, the AI Vision Sensor Object Exists? block is used to ensure that at least one object was detected in the snapshot. This makes sure that the robot isn’t trying to pull data from an empty array.
If the AI Vision Sensor has detected at least one object, it will print the CenterX coordinate of the largest detected object to the Brain’s screen.
when started :: hat events
forever
[Take a snapshot looking for the RedBox Color Signature.]
take a [AIVision5 v] snapshot of [RedBox v]
clear all rows on [Brain v]
set cursor to row (1) column (1) on Brain :: #9A67FF
[Check if any objects matching the RedBox Color Signature were detected.]
if <[AIVision5 v] object exists?> then
[If any objects were detected, set the object item to 1.]
set [AIVision5 v] object item to (1)
[Print object 1's CenterX coordinate on the Brain Screen.]
print [CenterX:] on [Brain v] ▶
print ([AIVision5 v] object [centerX v]) on [Brain v] ◀ and set cursor to next row
end
[Repeat the process every 0.25 seconds.]
wait (0.25) seconds
end
AI Classification Is#
The AI Classification Is block is used to report if the specified AI Classification has been detected.
<[AIVision1 v] AI classification is [BlueBall v] ? :: #5cb0d6>
The Take AI Vision Snapshot block must be used to take a snapshot of AI Classifications before the AI Classification Is block can be used.
The AI Classification Is block reports True when the AI Vision Sensor has detected the specified AI Classification.
The AI Classification Is block reports False when the AI Vision Sensor has not detected the specified AI Classification.
Choose which AI Vision Sensor to use.
Choose which AI Classification to detect. The available AI Classifications will vary depending on which detection model you are using.
In this example, the AI Vision Sensor will take a snapshot of all AI Classifications before checking whether a Blue Ball was detected. If a Blue Ball was detected, it will print a message to the Brain’s screen.
when started :: hat events
take a [AIVision1 v] snapshot of [AI Classifications v]
if <[AIVision1 v] AI classification is [BlueBall v] ? :: #5cb0d6> then
print [Blue Ball detected!] on [Brain v] ▶
end
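As a complementary sketch, the confidence score of the detection can also be reported. This assumes the same AIVision1 sensor and BlueBall AI Classification as above, and that the Blue Ball is the largest (or only) detected object, so object item 1 refers to it.
when started :: hat events
[Take a snapshot looking for all AI Classifications.]
take a [AIVision1 v] snapshot of [AI Classifications v]
if <[AIVision1 v] AI classification is [BlueBall v] ? :: #5cb0d6> then
[Report the confidence score of the largest detected object.]
set [AIVision1 v] object item to (1)
print [Score: ] on [Brain v] ▶
print ([AIVision1 v] object [score v]) on [Brain v] ◀ and set cursor to next row
end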
Detected AprilTag Is#
The Detected AprilTag Is block is used to report if the specified AprilTag is detected. For more information on what AprilTags are and how to enable their detection, go here.
<[AIVision1 v] detected AprilTag is (1) ? :: #5cb0d6>
The Take AI Vision Snapshot block must be used to take a snapshot of AprilTags before the Detected AprilTag Is block can be used.
The Detected AprilTag Is block reports True when the AI Vision Sensor has detected the specified AprilTag.
The Detected AprilTag Is block reports False when the AI Vision Sensor has not detected the specified AprilTag.
Choose which AI Vision Sensor to use.
In this example, the AI Vision Sensor will take a snapshot of all AprilTags before checking whether the AprilTag with the ID “3” was detected. If that specific AprilTag was detected, it will print a message to the Brain’s screen.
when started :: hat events
take a [AIVision1 v] snapshot of [AprilTags v]
if <[AIVision1 v] detected AprilTag is (3) ? :: #5cb0d6> then
print [AprilTag 3 detected!] on [Brain v] ▶
end
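Because this block only checks the most recent snapshot, a program that waits for an AprilTag to appear needs to keep taking new snapshots. A minimal sketch of that pattern, assuming the same AIVision1 sensor and the standard Control blocks:
when started :: hat events
take a [AIVision1 v] snapshot of [AprilTags v]
[Keep taking new snapshots until AprilTag 3 is detected.]
repeat until <[AIVision1 v] detected AprilTag is (3) ? :: #5cb0d6>
take a [AIVision1 v] snapshot of [AprilTags v]
wait (0.25) seconds
end
print [AprilTag 3 detected!] on [Brain v] ▶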
Set AI Vision Sensor Object Item#
The Set AI Vision Sensor Object Item block is used to select which of the detected objects to report information about. By default, the object item is set to 1 at the start of a project.
When multiple objects are detected, they will be stored from largest to smallest, with object item 1 being the largest.
Note: AprilTags are not sorted by their size, but by their unique IDs in ascending order (see the sketch at the end of this section). For example, if AprilTags 1, 15, and 3 are detected:
AprilTag 1 will be object item 1.
AprilTag 3 will be object item 2.
AprilTag 15 will be object item 3.
set [AIVision5 v] object item to (1)
The Take AI Vision Snapshot block is required first before the Set AI Vision Sensor Object Item block can be used.
Choose which AI Vision Sensor to use.
In this example, every 0.25 seconds, the AI Vision Sensor will take a snapshot of the RedBox Color Signature.
Then, if an object was detected, it will print the CenterX coordinate of the largest object (object item 1) to the Brain’s screen.
when started :: hat events
forever
[Take a snapshot looking for the RedBox Color Signature.]
take a [AIVision5 v] snapshot of [RedBox v]
clear all rows on [Brain v]
set cursor to row (1) column (1) on Brain :: #9A67FF
[Check if any objects matching the RedBox Color Signature were detected.]
if <[AIVision5 v] object exists?> then
[If any objects were detected, set the object item to 1.]
set [AIVision5 v] object item to (1)
[Print object 1's CenterX coordinate on the Brain Screen.]
print [CenterX:] on [Brain v] ▶
print ([AIVision5 v] object [centerX v]) on [Brain v] ◀ and set cursor to next row
end
[Repeat the process every 0.25 seconds.]
wait (0.25) seconds
end
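To illustrate the AprilTag ordering described in the note above, here is a minimal sketch that steps through every detected AprilTag and prints its tagID, from the lowest ID to the highest. It assumes the same AIVision5 sensor and a project variable named index.
when started :: hat events
[Take a snapshot looking for AprilTags.]
take a [AIVision5 v] snapshot of [AprilTags v]
set [index v] to (1)
[Step through each detected AprilTag, from lowest ID to highest.]
repeat ([AIVision5 v] object count)
set [AIVision5 v] object item to (index)
print ([AIVision5 v] object [tagID v]) on [Brain v] ◀ and set cursor to next row
change [index v] by (1)
end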
AI Vision Sensor Object Count#
The AI Vision Sensor Object Count block is used to report how many objects the AI Vision Sensor detects that match the specified Visual Signature.
A Visual Signature can be a Color Signature, Color Code, AprilTag, or AI Classification.
([AIVision5 v] object count)
The Take AI Vision Snapshot block is required first before the AI Vision Sensor Object Count block can be used.
Choose which AI Vision Sensor to use.
In this example, every 0.25 seconds, the AI Vision Sensor will take a snapshot of the RedBox Color Signature.
Then it will check if any objects were detected before printing how many were detected.
when started :: hat events
forever
[Take a snapshot looking for the RedBox Color Signature.]
take a [AIVision5 v] snapshot of [RedBox v]
clear all rows on [Brain v]
set cursor to row (1) column (1) on Brain :: #9A67FF
[Check if any objects matching the RedBox Color Signature were detected.]
if <[AIVision5 v] object exists?> then
[If any objects were detected, print how many were detected.]
print [# of Objects Detected: ] on [Brain v] ▶
print ([AIVision5 v] object count) on [Brain v] ◀ and set cursor to next row
end
[Repeat the process every 0.25 seconds.]
wait (0.25) seconds
end
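Because the AI Vision Sensor Object Count block is a reporter, it can also be placed inside other blocks. Here is a minimal sketch, assuming the same AIVision5 sensor and RedBox Color Signature, that only prints a message when more than one matching object is in view:
when started :: hat events
forever
take a [AIVision5 v] snapshot of [RedBox v]
clear all rows on [Brain v]
set cursor to row (1) column (1) on Brain :: #9A67FF
[React only when more than one matching object is detected.]
if <([AIVision5 v] object count) > (1)> then
print [Multiple objects detected!] on [Brain v] ▶
end
wait (0.25) seconds
end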
AI Vision Sensor Object Exists?#
The AI Vision Sensor Object Exists? block is used to report if the AI Vision Sensor detects a Visual Signature.
A Visual Signature can be a Color Signature, Color Code, AprilTag, or AI Classification.
<[AIVision5 v] object exists?>
The Take AI Vision Snapshot block is required first before the AI Vision Sensor Object Exists? block can be used.
The AI Vision Sensor Object Exists? block reports True when the AI Vision Sensor has detected an object.
The AI Vision Sensor Object Exists? block reports False when the AI Vision Sensor has not detected an object.
Choose which AI Vision Sensor to use.
In this example, every 0.25 seconds, the AI Vision Sensor will take a snapshot of the RedBox Color Signature.
Then it will check if any objects were detected before printing how many were detected.
when started :: hat events
forever
[Take a snapshot looking for the RedBox Color Signature.]
take a [AIVision5 v] snapshot of [RedBox v]
clear all rows on [Brain v]
set cursor to row (1) column (1) on Brain :: #9A67FF
[Check if any objects matching the RedBox Color Signature were detected.]
if <[AIVision5 v] object exists?> then
[If any objects were detected, print how many were detected.]
print [# of Objects Detected: ] on [Brain v] ▶
print ([AIVision5 v] object count) on [Brain v] ◀ and set cursor to next row
end
[Repeat the process every 0.25 seconds.]
wait (0.25) seconds
end
AI Vision Sensor Object#
The AI Vision Sensor Object block is used to report information about a specified Visual Signature from the AI Vision Sensor.
A Visual Signature can be a Color Signature, Color Code, AprilTag, or AI Classification.
([AIVision5 v] object [width v])
The Take AI Vision Snapshot block is required first before the AI Vision Sensor Object block can be used.
Choose which AI Vision Sensor to use.
Choose which property to report from the AI Vision Sensor:
width - How wide the object is in pixels, from 0 - 320 pixels.
height - How tall the object is in pixels, from 0 - 240 pixels.
centerX - The center X coordinate of the detected object, from 0 - 320 pixels.
centerY - The center Y coordinate of the detected object, from 0 - 240 pixels.
originX - The X coordinate of the top-left corner of the object, from 0 - 320 pixels.
originY - The Y coordinate of the top-left corner of the object, from 0 - 240 pixels.
angle - The angle of the detected Color Code or AprilTag, from 0 - 360 degrees.
tagID - The detected AprilTag’s identification number.
score - The confidence score (up to 100%) for AI Classifications. This score indicates how confident the model is in the detected AI Classification. A higher score indicates greater confidence in the accuracy of the AI Classification.
For more examples of using object properties, go here. An additional sketch using the centerX property follows the example below.
In this example, every 0.25 seconds, the AI Vision Sensor will take a snapshot of the RedBox Color Signature.
Then, if an object was detected, it will print the CenterX coordinate of the largest object (object item 1) to the Brain’s screen.
when started :: hat events
forever
[Take a snapshot looking for the RedBox Color Signature.]
take a [AIVision5 v] snapshot of [RedBox v]
clear all rows on [Brain v]
set cursor to row (1) column (1) on Brain :: #9A67FF
[Check if any objects matching the RedBox Color Signature were detected.]
if <[AIVision5 v] object exists?> then
[If any objects were detected, set the object item to 1.]
set [AIVision5 v] object item to (1)
[Print object 1's CenterX coordinate on the Brain Screen.]
print [CenterX:] on [Brain v] ▶
print ([AIVision5 v] object [centerX v]) on [Brain v] ◀ and set cursor to next row
end
[Repeat the process every 0.25 seconds.]
wait (0.25) seconds
end
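As one more illustration of the centerX property, here is a minimal sketch that reports whether the largest detected object is on the left or right half of the 320-pixel-wide image, using 160 as the horizontal center. It assumes the same AIVision5 sensor and RedBox Color Signature as the example above.
when started :: hat events
forever
take a [AIVision5 v] snapshot of [RedBox v]
clear all rows on [Brain v]
set cursor to row (1) column (1) on Brain :: #9A67FF
if <[AIVision5 v] object exists?> then
set [AIVision5 v] object item to (1)
[Compare the object's CenterX coordinate to the middle of the 320-pixel-wide image.]
if <([AIVision5 v] object [centerX v]) < (160)> then
print [Object is on the left.] on [Brain v] ▶
else
print [Object is on the right.] on [Brain v] ▶
end
end
wait (0.25) seconds
end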