AiVision#
Initializing the AiVision Class#
An AI Vision Sensor is created by using the following constructor:
AiVision(port, colors, codes)
This constructor uses three parameters:
| Parameter | Description |
| --- | --- |
| port | A valid Smart Port that the AI Vision Sensor is connected to. |
| colors | Optional. The name of one or more Colordesc objects. |
| codes | Optional. The name of one or more Codedesc objects. |
# Create a new Color Signature "red" with the Colordesc class.
red = Colordesc(1, 207, 19, 25, 10.00, 0.20)

# Create a new AI Vision Sensor "ai_vision" with the AiVision
# class, with the "red" Colordesc.
ai_vision = AiVision(Ports.PORT1, red)
This ai_vision AiVision object and red Colordesc object will be used in all subsequent examples throughout this API documentation when referring to AiVision class methods.
Class Methods#
take_snapshot()#
The take_snapshot(type, count) method takes a snapshot of the AI Vision Sensor's current view and detects objects of a given signature, code, or signature id.
| Parameters | Description |
| --- | --- |
| type | The signature, code, or signature id of the objects to detect. |
| count | Optional. The maximum number of objects to obtain. The default is 8. |
Taking a snapshot will create a tuple of all of the detected objects of the type you specified. For instance, if you wanted to detect a “Blue” Color Signature, and the AI Vision Sensor detected 3 different blue objects, data from all three would be put in the tuple.
Returns: A tuple of detected objects, or an empty tuple if nothing is detected.
There are ten different properties that can be called from the tuple to get data from the specified object:
id
centerX and centerY
originX and originY
width and height
angle
exists
score
To access an object's property, use the name of the tuple, and then the object's index. For example, if the tuple is stored in a variable named vision_objects, you would call the width property as: vision_objects[0].width.
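For example, the following is a minimal sketch that takes a snapshot limited to 4 objects, passing the red Colordesc defined earlier as the signature to detect, and prints the width of the largest detected object. It assumes the ai_vision object created above and a brain object configured in the project, as in the examples later on this page.
while True:
    # Get a snapshot of up to 4 objects matching the "red"
    # Color Signature and store it in vision_objects.
    vision_objects = ai_vision.take_snapshot(red, 4)
    brain.screen.clear_screen()
    # Check to make sure an object was detected in the
    # snapshot before pulling data.
    if len(vision_objects) > 0:
        # Print the width of the largest detected object.
        brain.screen.print(vision_objects[0].width)
    wait(5, MSEC)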
id#
The id property is only available for AprilTags and AI Classifications.
For an AprilTag, the id property represents the ID number of the detected AprilTag.
For AI Classifications, the id property represents the specific type of AI Classification detected. For more information on what IDs AI Classifications have, go to this article.
To call the id property, a tuple must be created first using the ai_vision.take_snapshot command.
After creating a tuple, you can access specific objects and their properties using their index. The tuple is sorted by object area, from largest to smallest, with indices starting at 0.
Note: AprilTags are sorted by their unique IDs in ascending order, not by size. For example, if AprilTags 1, 15, and 3 are detected:
AprilTag 1 will have index 0.
AprilTag 3 will have index 1.
AprilTag 15 will have index 2.
To call this property, use the tuple followed by the index of the detected object to pull the property from. For example: vision_objects[0].id.
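For instance, the following sketch prints the ID of the first AprilTag in the tuple (the lowest-numbered tag, per the sorting note above). It uses the AiVision.ALL_TAGS argument also shown in the angle example later on this page, and assumes the ai_vision and brain objects used throughout these examples.
while True:
    # Get a snapshot of all detected AprilTags and store
    # it in vision_objects.
    vision_objects = ai_vision.take_snapshot(AiVision.ALL_TAGS)
    brain.screen.clear_screen()
    # Check to make sure an AprilTag was detected in the
    # snapshot before pulling data.
    if len(vision_objects) > 0:
        # Print the ID of the lowest-numbered detected AprilTag.
        brain.screen.print(vision_objects[0].id)
    wait(5, MSEC)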
centerX and centerY#
The centerX and centerY properties report the center coordinates of the detected object in pixels.
To call these properties, a tuple must be created first using the ai_vision.take_snapshot command.
After creating a tuple, you can access specific objects and their properties using their index. The tuple is sorted by object area, from largest to smallest, with indices starting at 0.
To call this property, use the tuple followed by the index of the detected object to pull the property from. For example: vision_objects[0].centerX.
In this example, because the center of the AI Vision Sensor’s view is (160, 120), the robot will turn right until a detected object’s centerX coordinate is greater than 150 pixels, but less than 170 pixels.
while True:
    # Get a snapshot of all Blue Color Signatures and store
    # it in vision_objects.
    vision_objects = ai_vision.take_snapshot(ai_vision_1__Blue)
    # Check to make sure an object was detected in the
    # snapshot before pulling data.
    if vision_objects[0].exists == True:
        # Check if the object isn't in the center of the
        # AI Vision Sensor's view.
        if vision_objects[0].centerX < 150 or vision_objects[0].centerX > 170:
            # Keep turning right until the object is in the
            # center of the view.
            drivetrain.turn(RIGHT)
        else:
            drivetrain.stop()
    wait(5, MSEC)
originX and originY#
The originX and originY properties report the coordinates, in pixels, of the top-left corner of the object's bounding box.
To call these properties, a tuple must be created first using the ai_vision.take_snapshot command.
After creating a tuple, you can access specific objects and their properties using their index. The tuple is sorted by object area, from largest to smallest, with indices starting at 0.
To call a property, use the tuple followed by the index of the detected object to pull the property from. For example: vision_objects[0].originX.
In this example, a rectangle will be drawn on the Brain's screen with the exact measurements of the specified object's bounding box.
while True:
    # Get a snapshot of all Blue Color Signatures and store
    # it in vision_objects.
    vision_objects = ai_vision.take_snapshot(ai_vision_1__Blue)
    brain.screen.clear_screen()
    # Check to make sure an object was detected in the
    # snapshot before pulling data.
    if len(vision_objects) > 0:
        brain.screen.draw_rectangle(vision_objects[0].originX, vision_objects[0].originY, vision_objects[0].width, vision_objects[0].height)
    wait(5, MSEC)
width and height#
The width and height properties report the width or height of the object in pixels.
To call these properties, a tuple must be created first using the ai_vision.take_snapshot command.
After creating a tuple, you can access specific objects and their properties using their index. The tuple is sorted by object area, from largest to smallest, with indices starting at 0.
To call a property, use the tuple followed by the index of the detected object to pull the property from. For example: vision_objects[0].width.
In this example, the width of the object is used for navigation. The robot will approach the object until the width has reached a specific size before stopping.
while True:
    # Get a snapshot of all Blue Color Signatures and store
    # it in vision_objects.
    vision_objects = ai_vision.take_snapshot(ai_vision_1__Blue)
    # Check to make sure an object was detected in the
    # snapshot before pulling data.
    if vision_objects[0].exists == True:
        # Check if the largest object is close to the
        # AI Vision Sensor by measuring its width.
        if vision_objects[0].width < 250:
            # Drive closer to the object until it's wider
            # than 250 pixels.
            drivetrain.drive(FORWARD)
        else:
            drivetrain.stop()
    wait(5, MSEC)
angle#
The angle property is only available for Color Codes and AprilTags.
This property reports the detected Color Code's or AprilTag's angle.
To call the angle property, a tuple must be created first using the ai_vision.take_snapshot command.
After creating a tuple, you can access specific objects and their properties using their index. The tuple is sorted by object area, from largest to smallest, with indices starting at 0.
To call this property, use the tuple followed by the index of the detected object to pull the property from. For example: vision_objects[0].angle.
In this example, the AprilTag’s angle is printed to the Brain’s screen.
while True:
    # Get a snapshot of all detected AprilTags and store
    # it in vision_objects.
    vision_objects = ai_vision.take_snapshot(AiVision.ALL_TAGS)
    brain.screen.clear_screen()
    # Check to make sure an AprilTag was detected in the
    # snapshot before pulling data.
    if len(vision_objects) > 0:
        brain.screen.print(vision_objects[0].angle)
    wait(5, MSEC)
exists#
This property returns a boolean value indicating whether or not the specified object exists.
To call the exists property, a tuple must be created first using the ai_vision.take_snapshot command.
After creating a tuple, you can access specific objects and their properties using their index. The tuple is sorted by object area, from largest to smallest, with indices starting at 0.
To call this property, use the tuple followed by the index of the detected object to pull the property from. For example: vision_objects[0].exists.
In this example, the object is checked to see if it exists before attempting to pull data from it for navigation.
while True:
    # Get a snapshot of all Blue Color Signatures and store
    # it in vision_objects.
    vision_objects = ai_vision.take_snapshot(ai_vision_1__Blue)
    # Check to make sure an object was detected in the
    # snapshot before pulling data.
    if vision_objects[0].exists == True:
        # Check if the largest object is close to the
        # AI Vision Sensor by measuring its width.
        if vision_objects[0].width < 250:
            # Drive closer to the object until it's wider
            # than 250 pixels.
            drivetrain.drive(FORWARD)
        else:
            drivetrain.stop()
    wait(5, MSEC)
score#
The score property is only available for AI Classifications.
This property returns the confidence score of the specified AI classification. The score ranges from 0% to 100%, indicating the AI Vision Sensor’s level of certainty in its detection accuracy.
To call the score property, a tuple must be created first using the ai_vision.take_snapshot command.
After creating a tuple, you can access specific objects and their properties using their index. The tuple is sorted by object area, from largest to smallest, with indices starting at 0.
To call this property, use the tuple followed by the index of the detected object to pull the property from. For example: vision_objects[0].score.
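For instance, the following sketch prints the id and score of the most prominent AI Classification in each snapshot. The AiVision.ALL_AIOBJS argument used here to request all AI Classifications is an assumption, as it is not shown elsewhere in this section; the ai_vision and brain objects are the ones from the earlier examples.
while True:
    # Get a snapshot of all AI Classifications and store
    # it in vision_objects. AiVision.ALL_AIOBJS is assumed here.
    vision_objects = ai_vision.take_snapshot(AiVision.ALL_AIOBJS)
    brain.screen.clear_screen()
    # Check to make sure an AI Classification was detected
    # in the snapshot before pulling data.
    if len(vision_objects) > 0:
        # Print the classification's id and confidence score.
        brain.screen.print(vision_objects[0].id)
        brain.screen.next_row()
        brain.screen.print(vision_objects[0].score)
    wait(5, MSEC)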
object_count()#
The object_count() method returns the number of objects detected in the last use of the take_snapshot method.
Returns: An integer representing the number of objects detected in the last use of the take_snapshot method.
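A minimal sketch of this method, assuming the ai_vision_1__Blue Color Signature and the brain object from the earlier examples:
while True:
    # Take a snapshot of all Blue Color Signatures.
    ai_vision.take_snapshot(ai_vision_1__Blue)
    brain.screen.clear_screen()
    # Print how many objects were detected in that snapshot.
    brain.screen.print(ai_vision.object_count())
    wait(5, MSEC)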
tag_detection()#
The tag_detection(enable) method enables or disables AprilTag detection.
| Parameters | Description |
| --- | --- |
| enable | A boolean value that enables or disables AprilTag detection. |
Returns: None.
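For example, the following sketch enables AprilTag detection before taking snapshots of tags. It assumes the ai_vision and brain objects from the earlier examples.
# Enable AprilTag detection before looking for tags.
ai_vision.tag_detection(True)

while True:
    # Take a snapshot of all detected AprilTags.
    vision_objects = ai_vision.take_snapshot(AiVision.ALL_TAGS)
    brain.screen.clear_screen()
    # Check to make sure an AprilTag was detected in the
    # snapshot before pulling data.
    if len(vision_objects) > 0:
        # Print the ID of the first detected AprilTag.
        brain.screen.print(vision_objects[0].id)
    wait(5, MSEC)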
color_detection()#
The color_detection(enable, merge) method enables or disables color and code object detection.
| Parameters | Description |
| --- | --- |
| enable | A boolean value that enables or disables color and code object detection. |
| merge | Optional. A boolean value which enables or disables the merging of adjacent color detections. |
Returns: None.
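As a short sketch, assuming the ai_vision object and ai_vision_1__Blue Color Signature from the earlier examples, color detection can be enabled with merging turned on before taking snapshots:
# Enable color and code object detection, and merge adjacent
# color detections into single objects.
ai_vision.color_detection(True, True)

while True:
    # Take a snapshot of all Blue Color Signatures.
    vision_objects = ai_vision.take_snapshot(ai_vision_1__Blue)
    brain.screen.clear_screen()
    # Check to make sure an object was detected in the
    # snapshot before pulling data.
    if len(vision_objects) > 0:
        # Print the width of the largest detected object.
        brain.screen.print(vision_objects[0].width)
    wait(5, MSEC)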
model_detection()#
The model_detection(enable) method enables or disables AI model object detection.
| Parameters | Description |
| --- | --- |
| enable | A boolean value that enables or disables AI model object detection. |
Returns: None.
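A minimal sketch, assuming the ai_vision object from above, that turns AI model object detection on while it is needed and off afterward:
# Enable AI model object detection before taking snapshots
# of AI Classifications.
ai_vision.model_detection(True)

# ... take snapshots of AI Classifications here ...

# Disable AI model object detection when it is no longer needed.
ai_vision.model_detection(False)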
start_awb()#
The start_awb() method runs auto white balance.
Returns: None.
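A minimal sketch, assuming the ai_vision object from above, that runs auto white balance once before the main detection loop; the brief pause afterward is illustrative rather than required:
# Run auto white balance so colors are detected consistently
# under the current lighting.
ai_vision.start_awb()
# Optional brief pause before taking snapshots.
wait(1, SECONDS)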
set()#
The set() method sets a new Color Signature or Color Code.
Returns: None.
installed()#
The installed() method checks for device connection.
Returns: True if the AI Vision Sensor is connected. False if it is not.
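For example, a sketch that checks the connection before using the sensor, assuming the ai_vision, brain, and ai_vision_1__Blue objects from the earlier examples:
# Only take snapshots if the AI Vision Sensor is connected.
if ai_vision.installed():
    ai_vision.take_snapshot(ai_vision_1__Blue)
    brain.screen.print(ai_vision.object_count())
else:
    brain.screen.print("AI Vision Sensor not connected")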
timestamp()#
The timestamp() method requests the timestamp of the last received status packet from the AI Vision Sensor.
Returns: Timestamp of the last received status packet in milliseconds.
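A minimal sketch, assuming the ai_vision and brain objects from the earlier examples, that prints the timestamp of the most recent status packet:
while True:
    brain.screen.clear_screen()
    # Print the timestamp (in milliseconds) of the last status
    # packet received from the AI Vision Sensor.
    brain.screen.print(ai_vision.timestamp())
    wait(100, MSEC)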