AI Vision Sensor#

Introduction#

The AI Vision Sensor can detect and track objects, colors, and AprilTags. This allows the robot to analyze its surroundings, follow objects, and react based on detected visual data.

Below is a list of available blocks:

Actions – Capture data from the AI Vision Sensor for a selected signature.

  • take snapshot – Captures data for a specific object type, such as colors, pre-trained objects, or AprilTags.

Settings – Choose which object to interact with.

  • set AI Vision object item – Sets which item in the dataset to use.

Values – Access and use the captured data.

  • AI Vision object exists? – Returns whether the dataset includes a detected object.

  • AI Vision object is? – Returns whether a detected object matches a specific AI Classification.

  • AI Vision object is AprilTag ID? – Returns whether a detected AprilTag matches a specific ID.

  • AI Vision object count – Returns the number of detected objects in the dataset.

  • AI Vision object property – Returns a property of the detected object.

Actions#

take snapshot#

The take snapshot block filters data from the AI Vision Sensor frame. The AI Vision Sensor can detect signatures that include pre-trained objects, AprilTags, or configured colors and color codes.

Colors and color codes must be configured first in the AI Vision Utility before they can be used with this block.

The dataset stores objects ordered from largest to smallest by width, starting at index 0. Each object’s properties can be accessed using the AI Vision object property block. An empty dataset is returned if no matching objects are detected.
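
The ordering rule can be modeled with a short sketch in plain Python. This is not the VEX API; the object names and widths are hypothetical stand-ins for detected objects.

```python
# Hypothetical detections from one snapshot: (name, width in pixels).
detections = [("RedBall", 48), ("BlueBall", 120), ("GreenBall", 75)]

# The dataset orders objects from largest to smallest width, starting at index 0.
dataset = sorted(detections, key=lambda obj: obj[1], reverse=True)

print(dataset[0])  # the widest object sits at index 0

# If nothing matched, the dataset is simply empty.
empty_dataset = sorted([], key=lambda obj: obj[1], reverse=True)
print(len(empty_dataset))  # 0
```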

The Take snapshot stack block.#
  take a [AIVision1 v] snapshot of [SELECT_A_SIG v]

Parameter

  • signature – Filters the dataset to only include data of the given signature. Available signatures include configured Color Signatures and Color Codes, AprilTags, and AI Classifications.

Note: For AprilTag or AI Classification options to appear, their detection must be enabled in the AI Vision Utility.

Example

Example coming soon!

AI Classifications#

The AI Vision Sensor can detect different objects under certain AI Classifications. Depending on the AI Classification model selected when configuring the AI Vision Sensor in the Devices window, different objects will be detected. The currently available models are:

Classroom Objects

  • BlueBall

  • GreenBall

  • RedBall

  • BlueRing

  • GreenRing

  • RedRing

  • BlueCube

  • GreenCube

  • RedCube

V5RC Push Back

  • BlueBlock

  • RedBlock

V5RC High Stakes

  • MobileGoal

  • RedRing

  • BlueRing

Color Signatures#

A Color Signature is a unique color that the AI Vision Sensor can recognize. These signatures allow the sensor to detect and track objects based on their color. Once a Color Signature is configured, the sensor can identify objects with that specific color in its field of view.

Color Signatures are used in the take snapshot block to process and detect colored objects in real-time. Up to 7 Color Signatures can be configured at a time.

The AI Vision Utility showing a connected vision sensor detecting two colored objects. The left side displays a live camera feed with a blue box on the left and a red box on the right, each outlined with white bounding boxes. Black labels display their respective names, coordinates, and dimensions. The right side contains color signature settings, with sliders for hue and saturation range for both the red and blue boxes. Buttons for adding colors, freezing video, copying, and saving the image are at the bottom, along with a close button in the lower right corner.

Example

Example coming soon!

Color Codes#

A Color Code is a structured pattern made up of 2 to 4 Color Signatures arranged in a specific order. These codes allow the AI Vision Sensor to recognize predefined patterns of colors.

Color Codes are particularly useful for identifying complex objects, aligning with game elements, or creating unique markers for autonomous navigation. Up to 8 Color Codes can be configured at a time.
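
A rough sketch of how an ordered pattern could be matched, in plain Python. This is not the VEX implementation; the signature names and matching logic are illustrative only.

```python
def matches_color_code(code, detected):
    """Return True if the ordered run of signatures in `code` appears
    contiguously, in order, within the detected colors."""
    if not 2 <= len(code) <= 4:
        raise ValueError("a Color Code uses 2 to 4 Color Signatures")
    n = len(code)
    return any(detected[i:i + n] == code for i in range(len(detected) - n + 1))

# A hypothetical BlueRed code: the Blue_Box signature followed by Red_Box.
blue_red = ["Blue_Box", "Red_Box"]

print(matches_color_code(blue_red, ["Green_Box", "Blue_Box", "Red_Box"]))  # True
print(matches_color_code(blue_red, ["Red_Box", "Blue_Box"]))  # False: order matters
```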

The AI Vision Utility interface shows a connected vision sensor detecting two adjacent objects, a blue box on the left and a red box on the right, grouped together in a single white bounding box labeled BlueRed. Detection information includes angle (A:11°), coordinates (X:143, Y:103), width (W:233), and height (H:108). On the right panel, three color signatures are listed: Red_Box, Blue_Box, and BlueRed, with adjustable hue and saturation ranges. The BlueRed signature combines the Blue_Box and Red_Box. Below the video feed are buttons labeled Freeze Video, Copy Image, Save Image, and Close.

Example

Example coming soon!

Settings#

set AI Vision object item#

The set AI Vision object item block sets which item in the dataset to use.

The Set AI Vision object item stack block.#
  set [AIVision1 v] object item to (1)

Parameter

  • item – The number of the item in the dataset to use.
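
Item selection can be sketched in plain Python. This is not the VEX API, and it assumes the block’s item number is 1-based, so item 1 refers to the largest (first) object in the width-ordered dataset.

```python
# Hypothetical width-ordered dataset: (name, width in pixels).
dataset = [("BlueBall", 120), ("GreenBall", 75), ("RedBall", 48)]

def select_item(dataset, item):
    # Assumption: a 1-based item number maps to a 0-based dataset index.
    index = item - 1
    if 0 <= index < len(dataset):
        return dataset[index]
    return None  # the item number is beyond the detected objects

print(select_item(dataset, 1))  # the widest object
print(select_item(dataset, 4))  # None: only three objects were detected
```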

Example

Example coming soon!

Values#

AI Vision object exists?#

The AI Vision object exists? block returns a Boolean indicating whether any object is detected in the dataset.

  • True – The dataset includes a detected object.

  • False – The dataset does not include any detected objects.

The AI Vision object exists Boolean block.#
  <[AIVision1 v] object exists?>

Parameters

This block has no parameters.

Example

Example coming soon!

AI Vision object is?#

The AI Vision object is? block returns a Boolean indicating whether a detected object matches a specific classification.

  • True – The item in the dataset matches the selected AI Classification.

  • False – The item in the dataset does not match the selected AI Classification.

The AI Vision AI Classification is object Boolean block.#
  <[AIVision1 v] object is [BlueBall v]?>

Parameter

  • object – Which AI Classification to compare the item to.

Example

Example coming soon!

AI Vision object is AprilTag ID?#

The AI Vision object is AprilTag ID? block returns a Boolean indicating whether a detected AprilTag matches a specific ID.

  • True – The detected AprilTag’s ID matches the given number.

  • False – The detected AprilTag’s ID does not match the given number.
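
The check can be modeled as a simple equality test in plain Python. This is not the VEX API; the tag records are hypothetical.

```python
# Hypothetical snapshot of detected AprilTags and their integer IDs.
detected_tags = [{"tagID": 3}, {"tagID": 1}]

def object_is_apriltag(tag, number):
    # True only when the detected tag's ID equals the given number.
    return tag["tagID"] == number

current = detected_tags[0]
print(object_is_apriltag(current, 1))  # False: this tag's ID is 3
print(object_is_apriltag(current, 3))  # True
```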

The AI Vision detected AprilTag is Boolean block.#
  <[AIVision1 v] object is AprilTag [1] ?>

Parameter

  • AprilTag number – The number to compare against the detected AprilTag’s ID.

Example

Example coming soon!

AI Vision object count#

The AI Vision object count block returns the number of detected objects in the dataset as an integer.

The AI Vision object count reporter block.#
  ([AIVision1 v] object count)

Parameters

This block has no parameters.

Example

Example coming soon!

AI Vision object property#

Each object stored after the take snapshot block is used includes the nine properties shown below.

The AI Vision object property reporter block.#
  ([AIVision1 v] object [width v])

Some property values are based on the detected object’s position in the AI Vision Sensor’s view at the time the take snapshot block was used. The AI Vision Sensor has a resolution of 320 by 240 pixels.
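
The position properties are related to one another. The plain-Python sketch below (not the VEX API; the bounding-box values are hypothetical) shows how a box’s center follows from its top-left origin and its size, and stays inside the 320 by 240 view:

```python
RES_WIDTH, RES_HEIGHT = 320, 240  # sensor resolution in pixels

# Hypothetical bounding box for one detected object.
box = {"originX": 143, "originY": 103, "width": 233, "height": 108}

# The center is the top-left origin offset by half the size.
center_x = box["originX"] + box["width"] // 2
center_y = box["originY"] + box["height"] // 2

print(center_x, center_y)  # 259 157
assert 0 <= center_x <= RES_WIDTH and 0 <= center_y <= RES_HEIGHT
```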

Parameter

  • property – Which property of the detected object to use:

width#

width returns the width of the detected object in pixels as an integer from 1 to 320.

The AI Vision object property reporter block with its parameter set to width.#
  ([AIVision1 v] object [width v])

Example

Example coming soon!

height#

height returns the height of the detected object in pixels as an integer from 1 to 240.

The AI Vision object property reporter block with its parameter set to height.#
  ([AIVision1 v] object [height v])

Example

Example coming soon!

centerX#

centerX returns the x-coordinate of the center of the detected object in pixels as an integer from 0 to 320.

The AI Vision object property reporter block with its parameter set to centerX.#
  ([AIVision1 v] object [centerX v])

Example

Example coming soon!

centerY#

centerY returns the y-coordinate of the center of the detected object in pixels as an integer from 0 to 240.

The AI Vision object property reporter block with its parameter set to centerY.#
  ([AIVision1 v] object [centerY v])

Example

Example coming soon!

angle#

angle returns the orientation of the detected Color Code or AprilTag as an integer in degrees from 0 to 359.
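
The 0 to 359 range can be sketched in plain Python (not the VEX API): any orientation in degrees reduces to an integer in that range.

```python
def wrap_angle(degrees):
    # Python's % returns a non-negative result for a positive modulus,
    # so this wraps any orientation into 0-359.
    return int(degrees) % 360

print(wrap_angle(370))  # 10
print(wrap_angle(-45))  # 315
print(wrap_angle(359))  # 359
```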

The AI Vision object property reporter block with its parameter set to angle.#
  ([AIVision1 v] object [angle v])

Example

Example coming soon!

originX#

originX returns the x-coordinate of the top-left corner of the detected object’s bounding box in pixels as an integer from 0 to 320.

The AI Vision object property reporter block with its parameter set to originX.#
  ([AIVision1 v] object [originX v])

Example

Example coming soon!

originY#

originY returns the y-coordinate of the top-left corner of the detected object’s bounding box in pixels as an integer from 0 to 240.

The AI Vision object property reporter block with its parameter set to originY.#
  ([AIVision1 v] object [originY v])

Example

Example coming soon!

tagID#

tagID returns the identification number of the detected AprilTag as an integer.

The AI Vision object property reporter block with its parameter set to tagID.#
  ([AIVision1 v] object [tagID v])

Example

Example coming soon!

score#

score returns how confident the model is in the detected AI Classification as a percentage from 70% to 100%.
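
A common use of score is to keep only detections above a chosen confidence threshold. Below is a plain-Python sketch with hypothetical detections, not the VEX API:

```python
# Hypothetical AI Classification detections; scores are percentages.
detections = [
    {"name": "BlueBall", "score": 94},
    {"name": "RedBall", "score": 71},
    {"name": "GreenBall", "score": 85},
]

# Anything reported is already at least 70%; a tighter threshold
# keeps only high-confidence detections.
confident = [d for d in detections if d["score"] >= 90]
print([d["name"] for d in confident])  # ['BlueBall']
```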

The AI Vision object property reporter block with its parameter set to score.#
  ([AIVision1 v] object [score v])

Example

Example coming soon!