Vision#
The Vision Sensor for VEX V5 detects and tracks Color Signatures and Color Codes. This allows the Vision Sensor to analyze its surroundings and react based on detected visual data. Below is an overview of its methods, properties, and constructors:
Methods – Get data from the Vision Sensor.
- take_snapshot – Captures data for a specific Color Signature or Color Code.
- largest_object – Returns the largest object from the most recent snapshot.
- installed – Returns whether the Vision Sensor is connected to the V5 Brain.

Properties – Object data returned from take_snapshot.
- .exists – Whether the object exists in the current detection as a Boolean.
- .width – Width of the detected object in pixels.
- .height – Height of the detected object in pixels.
- .centerX – X position of the object’s center in pixels.
- .centerY – Y position of the object’s center in pixels.
- .angle – Orientation of the Color Code in degrees.
- .originX – X position of the object’s top-left corner in pixels.
- .originY – Y position of the object’s top-left corner in pixels.

Constructors – Manually initialize and configure the Vision Sensor.
In VEXcode, the initialization of the Vision Sensor and its configured Color Signatures and Color Codes is done automatically. For the examples below, the configured Vision Sensor will be named vision_1. To manually initialize and construct a Vision Sensor and its Color Signatures and Color Codes, refer to the Constructors section on this page.
Methods#
take_snapshot#
take_snapshot filters the data from the current Vision Sensor frame and returns it as a tuple. The Vision Sensor can detect configured Color Signatures and Color Codes.
Color Signatures and Color Codes must be configured first in the Vision Utility before they can be used with this method.
The tuple stores objects ordered from largest to smallest by width, starting at index 0. Each object’s properties can be accessed using its index. An empty tuple is returned if no matching objects are detected.
Usage:
vision_1.take_snapshot(SIGNATURE)
| Parameters | Description |
| --- | --- |
| `SIGNATURE` | What signature to get data of. This is the name of the Vision Sensor, two underscores, and then the Color Signature’s or Color Code’s name. For example: `vision_1__RED_BOX`. |
# Move forward if a red object is detected
while True:
    red_box = vision_1.take_snapshot(vision_1__RED_BOX)
    if red_box:
        drivetrain.drive_for(FORWARD, 10, MM)
    wait(5, MSEC)
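The largest-to-smallest ordering of the returned tuple, and the empty tuple returned when nothing is detected, can be illustrated in plain Python. The `DetectedObject` stand-in below is hypothetical and only mimics the objects `take_snapshot` returns; it is not part of the VEX API:

```python
from collections import namedtuple

# Hypothetical stand-in for the objects take_snapshot returns.
# The real Vision Sensor orders objects largest to smallest by width.
DetectedObject = namedtuple("DetectedObject", ["width", "height", "centerX", "centerY"])

snapshot = (
    DetectedObject(width=120, height=80, centerX=158, centerY=106),
    DetectedObject(width=45, height=30, centerX=40, centerY=60),
)

# Index 0 is always the largest detected object.
print(snapshot[0].width)  # 120

# An empty tuple is falsy, so "if snapshot:" safely guards indexing.
print(bool(()))  # False
```

This is why the examples on this page check `if red_box:` before indexing: it avoids indexing into an empty tuple when no matching objects are in view.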
Color Signatures#
A color signature is a unique color that the Vision Sensor can recognize. These signatures allow the sensor to detect and track objects based on their color. Once a Color Signature is configured, the sensor can identify objects with that specific color in its field of view. Color signatures are used with take_snapshot to process and detect colored objects in real-time.
In order to use a configured Color Signature in a project, its name must be the name of the Vision Sensor, two underscores, and then the Color Signature’s name. For example: vision_1__RED_BOX.
# Display if any objects match the RED_BOX signature
while True:
    brain.screen.set_cursor(1, 1)
    brain.screen.clear_row(1)
    # Change to any configured Color Signature
    red_box = vision_1.take_snapshot(vision_1__RED_BOX)
    if red_box:
        brain.screen.print("Color signature detected!")
    wait(100, MSEC)
Color Codes#
A color code is a structured pattern made up of color signatures arranged in a specific order. These codes allow the Vision Sensor to recognize predefined patterns of colors. Color codes are useful for identifying complex objects or creating unique markers for autonomous navigation.
In order to use a configured Color Code in a project, its name must be the name of the Vision Sensor, two underscores, and then the Color Code’s name. For example: vision_1__BOX_CODE.
# Display if any objects match the BOX_CODE code
while True:
    brain.screen.set_cursor(1, 1)
    brain.screen.clear_row(1)
    # Change to any configured Color Code
    box_code = vision_1.take_snapshot(vision_1__BOX_CODE)
    if box_code:
        brain.screen.print("Color code detected!")
    wait(100, MSEC)
largest_object#
largest_object retrieves the largest object detected in the tuple returned by the most recent call to take_snapshot.
This method can be used to always get the largest object from a tuple without specifying an index.
Usage:
vision_1.largest_object()
# Turn slowly until the largest object is centered in
# front of the Vision Sensor
drivetrain.set_turn_velocity(10, PERCENT)
drivetrain.turn(RIGHT)
while True:
    red_box = vision_1.take_snapshot(vision_1__RED_BOX)
    if red_box:
        if 140 < vision_1.largest_object().centerX < 180:
            drivetrain.stop()
    wait(10, MSEC)
installed#
installed returns a Boolean indicating whether the Vision Sensor is currently connected to the V5 Brain.
- True – The Vision Sensor is connected to the V5 Brain.
- False – The Vision Sensor is not connected to the V5 Brain.
This method has no parameters.
# Display a message if the Vision Sensor is connected
if vision_1.installed():
    brain.screen.print("Vision Sensor Installed!")
Properties#
There are eight properties included with each object stored in the tuple after take_snapshot is used.
Some property values are based on the detected object’s position in the Vision Sensor’s view at the time take_snapshot was used. The Vision Sensor has a resolution of 316 by 212 pixels.
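The centering checks in the examples on this page (such as `140 < centerX < 180`) compare centerX against the middle of the 316-pixel-wide view. A plain-Python helper, hypothetical and not part of the VEX API, makes the arithmetic explicit:

```python
# Vision Sensor resolution in pixels, per the VEX documentation
RESOLUTION_WIDTH = 316
RESOLUTION_HEIGHT = 212

def is_centered(center_x, tolerance=20):
    # The horizontal midpoint of the view is 316 // 2 = 158, so
    # centerX values near 158 mean the object is straight ahead.
    midpoint = RESOLUTION_WIDTH // 2
    return abs(center_x - midpoint) <= tolerance

print(is_centered(158))  # True: exactly centered
print(is_centered(250))  # False: far to the right
```

A wider `tolerance` stops the robot sooner but less precisely; a narrower one risks overshooting the target while turning.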
.exists#
.exists returns a Boolean indicating whether the index exists in the tuple.
- True – The index exists.
- False – The index does not exist.
# Check if at least one red object is detected
while True:
    brain.screen.clear_screen()
    brain.screen.set_cursor(1, 1)
    red_objects = vision_1.take_snapshot(vision_1__RED_BOX)
    # Indexing an empty tuple would raise an error, so check
    # that the tuple has contents before using an index
    if red_objects and red_objects[0].exists:
        brain.screen.print("At least 1")
    else:
        brain.screen.print("No red objects")
    wait(0.5, SECONDS)
.width#
.width returns the width of the detected object in pixels, which is an integer between 1 and 316.
# Move towards a blue object until its width is
# larger than 100 pixels
while True:
    blue_box = vision_1.take_snapshot(vision_1__BLUE_BOX)
    if blue_box:
        if blue_box[0].width < 100:
            drivetrain.drive_for(FORWARD, 10, MM)
        else:
            drivetrain.stop()
    wait(50, MSEC)
.height#
.height returns the height of the detected object in pixels, which is an integer between 1 and 212.
# Move towards a blue object until its height is
# larger than 100 pixels
while True:
    blue_box = vision_1.take_snapshot(vision_1__BLUE_BOX)
    if blue_box:
        if blue_box[0].height < 100:
            drivetrain.drive_for(FORWARD, 10, MM)
        else:
            drivetrain.stop()
    wait(50, MSEC)
.centerX#
.centerX returns the x-coordinate of the detected object’s center in pixels, which is an integer between 0 and 316.
# Turn slowly until the largest blue object is centered
# in front of the Vision Sensor.
drivetrain.set_turn_velocity(10, PERCENT)
drivetrain.turn(RIGHT)
while True:
    blue_box = vision_1.take_snapshot(vision_1__BLUE_BOX)
    if blue_box:
        if 140 < vision_1.largest_object().centerX < 180:
            drivetrain.stop()
    wait(10, MSEC)
.centerY#
.centerY returns the y-coordinate of the detected object’s center in pixels, which is an integer between 0 and 212.
# Move towards a blue object until its
# center y-coordinate is more than 140 pixels
while True:
    blue_box = vision_1.take_snapshot(vision_1__BLUE_BOX)
    if blue_box:
        if blue_box[0].centerY < 140:
            drivetrain.drive(FORWARD)
        else:
            drivetrain.stop()
    wait(50, MSEC)
.angle#
.angle returns the orientation of the detected Color Code in degrees, which is an integer between 0 and 360.
# Turn left or right depending on how a
# configured box code is rotated
while True:
    box_code = vision_1.take_snapshot(vision_1__BOX_CODE)
    if box_code:
        if 70 < box_code[0].angle < 110:
            drivetrain.turn_for(RIGHT, 45, DEGREES)
        elif 250 < box_code[0].angle < 290:
            drivetrain.turn_for(LEFT, 45, DEGREES)
        else:
            drivetrain.stop()
    wait(50, MSEC)
.originX#
.originX returns the x-coordinate of the top-left corner of the detected object’s bounding box in pixels, which is an integer between 0 and 316.
# Display if a red object is to the
# left or the right
while True:
    brain.screen.clear_screen()
    brain.screen.set_cursor(1, 1)
    red_box = vision_1.take_snapshot(vision_1__RED_BOX)
    if red_box:
        if red_box[0].originX < 160:
            brain.screen.print("To the left!")
        else:
            brain.screen.print("To the right!")
    wait(50, MSEC)
.originY#
.originY returns the y-coordinate of the top-left corner of the detected object’s bounding box in pixels, which is an integer between 0 and 212.
# Display if a red object is close or far
# from the robot
while True:
    brain.screen.clear_screen()
    brain.screen.set_cursor(1, 1)
    red_box = vision_1.take_snapshot(vision_1__RED_BOX)
    if red_box:
        if red_box[0].originY < 80:
            brain.screen.print("Far")
        else:
            brain.screen.print("Close")
    wait(50, MSEC)
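The corner and center properties describe the same bounding box: the center is the top-left origin offset by half the box’s width and height. A quick plain-Python sketch of that relationship (the helper function is hypothetical, not part of the VEX API):

```python
def center_from_origin(origin_x, origin_y, width, height):
    # The bounding box's center is its top-left corner plus
    # half of its width and height (integer pixel coordinates).
    return (origin_x + width // 2, origin_y + height // 2)

# A 60x40-pixel box whose top-left corner is at (100, 50)
print(center_from_origin(100, 50, 60, 40))  # (130, 70)
```

This means .centerX and .centerY can always be recovered from .originX, .originY, .width, and .height, so use whichever pair is more convenient for the check you are writing.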
Constructors#
Constructors are used to manually create Vision, Signature, and Code objects, which are necessary for configuring the Vision Sensor outside of VEXcode.
For the examples below, the configured Vision Sensor will be named vision_1, and the configured Color Signature objects, such as RED_BOX, will be used in all subsequent examples throughout this API documentation when referring to Vision class methods.
Vision Sensor#
Vision creates a Vision Sensor.
Usage:
Vision(port, brightness, sigs)
| Parameters | Description |
| --- | --- |
| `port` | A valid Smart Port that the Vision Sensor is connected to. |
| `brightness` | Optional. The brightness value for the Vision Sensor, from 1 to 100. |
| `sigs` | Optional. The name of one or more Color Signature or Color Code objects. |
# Create a new Signature "RED_BOX" with the Signature class
RED_BOX = Signature(1, -3911, -3435, -3673, 10879, 11421, 11150, 2.5, 0)
# Create a new Vision Sensor "vision_1" with the Vision
# class, with the "RED_BOX" Signature.
vision_1 = Vision(Ports.PORT1, 100, RED_BOX)
# Move forward if a red object is detected
while True:
    red_object = vision_1.take_snapshot(RED_BOX)
    if red_object:
        drivetrain.drive_for(FORWARD, 10, MM)
    wait(5, MSEC)
Color Signature#
Signature creates a Color Signature. Up to seven different Color Signatures can be stored on a Vision Sensor at once.
Usage:
Signature(index, uMin, uMax, uMean, vMin, vMax, vMean, rgb, type)
| Parameter | Description |
| --- | --- |
| `index` | The index of the Color Signature, from 1 to 7. |
| `uMin` | The uMin value from the Vision Utility. |
| `uMax` | The uMax value from the Vision Utility. |
| `uMean` | The uMean value from the Vision Utility. |
| `vMin` | The vMin value from the Vision Utility. |
| `vMax` | The vMax value from the Vision Utility. |
| `vMean` | The vMean value from the Vision Utility. |
| `rgb` | The rgb value from the Vision Utility. |
| `type` | The type value from the Vision Utility. |
In order to obtain the values to create a Color Signature, go to the Vision Utility. Once a Color Signature is configured, copy the parameter values from the Configuration window.
# Create a new Signature RED_BOX with the Signature class
RED_BOX = Signature(1, 10121, 10757, 10439, -1657, -1223, -1440, 2.5, 1)
# Create a new Vision Sensor "vision_1" with the Vision
# class, with the RED_BOX Signature.
vision_1 = Vision(Ports.PORT1, 100, RED_BOX)
# Move forward if a red object is detected
while True:
    red_object = vision_1.take_snapshot(RED_BOX)
    if red_object:
        drivetrain.drive_for(FORWARD, 10, MM)
    wait(5, MSEC)
Color Code#
Code creates a Color Code. It requires at least two previously defined Color Signatures in order to be used. Up to eight different Color Codes can be stored on a Vision Sensor at once.
Usage:
Code(sig1, sig2, sig3, sig4, sig5)
| Parameter | Description |
| --- | --- |
| `sig1` | A previously created Color Signature. |
| `sig2` | A previously created Color Signature. |
| `sig3` | Optional. A previously created Color Signature. |
| `sig4` | Optional. A previously created Color Signature. |
| `sig5` | Optional. A previously created Color Signature. |
# Create two new Signatures for a red and blue box
RED_BOX = Signature(1, 10121, 10757, 10439, -1657, -1223, -1440, 2.5, 1)
BLUE_BOX = Signature(2, -4443, -3373, -3908, 6253, 7741, 6997, 2.5, 1)
# Create a Color Code for a red box to the left of a blue box
RED_BLUE = Code(RED_BOX, BLUE_BOX)
# Create a new Vision Sensor "vision_1" with the Vision
# class, with the RED_BOX and BLUE_BOX Signatures.
vision_1 = Vision(Ports.PORT1, 100, RED_BOX, BLUE_BOX)
# Display a message if the Color Code is detected
while True:
    brain.screen.set_cursor(1, 1)
    brain.screen.clear_row(1)
    box_code = vision_1.take_snapshot(RED_BLUE)
    if box_code:
        brain.screen.print("Color code detected!")
    wait(100, MSEC)