Vision#

Introduction#

The Vision category includes blocks that allow your robot to interact with visual information using a VEX IQ Vision Sensor. These blocks let the robot take snapshots of its surroundings, identify objects by color signature, and report properties like size, location, and angle.

Below is a list of available blocks:

take vision snapshot#

The take vision snapshot block will capture the current image from the Vision Sensor to be processed and analyzed for color signatures or color codes.

Color signatures and color codes must be configured in the Vision Sensor Utility before they can be used with this block.

Note: A snapshot must be taken before any other Vision Sensor blocks can be used.

The dataset stores objects ordered from largest to smallest by width, starting at index 0. Each object's properties can be accessed using the vision object property block. An empty dataset is returned if no matching objects are detected.

    take a [Vision1 v] snapshot of [SELECT_A_SIG v]

Parameters:

  • vision sensor – The Vision Sensor to use, configured in the Devices window.

  • signature – Filters the dataset to only include data of the specified color signature or color code.

Example
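A minimal sketch, assuming a Vision Sensor named Vision1 and a configured color signature named SIG_1 (both hypothetical names): a snapshot is taken on each pass of the loop, and the robot drives forward only while a SIG_1 object is detected.

  when started :: hat events
  forever
    take a [Vision1 v] snapshot of [SIG_1 v]
    if <[Vision1 v] object exists?> then
      move [forward v]
    else
      stop all movement
    end
  end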

set vision object index#

The set vision object index block sets which item in the dataset to use.

    set [Vision1 v] object index to [1]

Parameters:

  • vision sensor – The Vision Sensor to use, configured in the Devices window.

  • index – The number of the item in the dataset to use.

Example
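A sketch, assuming Vision1 and a hypothetical signature SIG_1: after a snapshot, the program checks that at least two objects were detected, then selects the object at index 1 (the second-largest, given the largest-to-smallest ordering described above) before reading one of its properties. The print block is used here as a sketch to show the value on the Brain's screen.

  when started :: hat events
  take a [Vision1 v] snapshot of [SIG_1 v]
  if <([Vision1 v] object count) [math_greater_than v] [1]> then
    set [Vision1 v] object index to [1]
    print ([Vision1 v] object [width v])
  end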

vision object count#

The vision object count block returns the number of detected objects in the dataset as an integer.

    ([Vision1 v] object count)

Parameter:

  • vision sensor – The Vision Sensor to use, configured in the Devices window.

Example
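A sketch, assuming Vision1 and a hypothetical signature SIG_1: the count is read after each snapshot and printed to the Brain's screen, with a short wait between readings.

  when started :: hat events
  forever
    take a [Vision1 v] snapshot of [SIG_1 v]
    print ([Vision1 v] object count)
    wait [0.5] seconds
  end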

vision object exists?#

The vision object exists? block returns a Boolean indicating whether a specified color signature or color code is detected in the dataset.

  • True – The dataset includes the color signature or color code.

  • False – The dataset does not include the color signature or color code.

    <[Vision1 v] object exists?>

Parameter:

  • vision sensor – The Vision Sensor to use, configured in the Devices window.

Example
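A sketch, assuming Vision1 and a hypothetical signature SIG_1: the robot turns in place, taking a snapshot on each pass, until a SIG_1 object appears in the dataset, then stops.

  when started :: hat events
  turn [right v]
  repeat until <[Vision1 v] object exists?>
    take a [Vision1 v] snapshot of [SIG_1 v]
  end
  stop all movement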

vision object property#

Each object in the dataset includes the properties shown below, stored when the take vision snapshot block is used.

    ([Vision1 v] object [width v])

Some property values are based on the position of the detected color signature or color code in the Vision Sensor's view at the time the take vision snapshot block was used. The Vision Sensor has a resolution of 316 by 212 pixels, so the center of its view is at (158, 106).

Parameters:

  • vision sensor – The Vision Sensor to use, configured in the Devices window.

  • property – Which property of the detected object to use:

width#

width returns the width of the detected color signature in pixels as an integer from 0 to 316.

The Vision object property stack block with its parameter set to width.#
  [Vision1 v] object [width v]

Example

  when started :: hat events
  [Move towards a blue barrel until its width is larger than 100 pixels.]
  forever
    take a [Vision1 v] snapshot of [blue barrel v]
    if <[Vision1 v] object exists?> then
      if <([Vision1 v] object [width v]) [math_less_than v] [100]> then
        move [forward v]
      else
        stop all movement
      end
    else
      stop all movement
    end
  end

height#

height returns the height of the detected color signature in pixels as an integer from 0 to 212.

The Vision object property stack block with its parameter set to height.#
  [Vision1 v] object [height v]

Example

  when started :: hat events
  [Move towards a blue barrel until its height is larger than 100 pixels.]
  forever
    take a [Vision1 v] snapshot of [blue barrel v]
    if <[Vision1 v] object exists?> then
      if <([Vision1 v] object [height v]) [math_less_than v] [100]> then
        move [forward v]
      else
        stop all movement
      end
    else
      stop all movement
    end
  end

centerX#

centerX returns the x-coordinate of the center of the detected color signature in pixels as an integer from 0 to 316.

The Vision object property stack block with its parameter set to centerX.#
  [Vision1 v] object [centerX v]

Example

  when started :: hat events
  [Turn slowly until a blue barrel is centered in front of the robot.]
  set turn velocity to [30] %
  turn [right v]
  forever
    take a [Vision1 v] snapshot of [blue barrel v]
    if <[Vision1 v] object exists?> then
      if <<[140] [math_less_than v] ([Vision1 v] object [centerX v])> and <([Vision1 v] object [centerX v]) [math_less_than v] [180]>> then
        stop all movement
      end
    end
  end

centerY#

centerY returns the y-coordinate of the center of the detected color signature in pixels as an integer from 0 to 212.

The Vision object property stack block with its parameter set to centerY.#
  [Vision1 v] object [centerY v]

Example

  when started :: hat events
  [Move towards a blue barrel until its center y-coordinate is more than 140 pixels.]
  forever
    take a [Vision1 v] snapshot of [blue barrel v]
    if <[Vision1 v] object exists?> then
      if <([Vision1 v] object [centerY v]) [math_less_than v] [140]> then
        move [forward v]
      else
        stop all movement
      end
    else
      stop all movement
    end
  end

angle#

angle returns the orientation of the detected color code as an integer in degrees from 0 to 180.

The Vision object property stack block with its parameter set to angle.#
  [Vision1 v] object [angle v]

Example

  when started :: hat events
  [Slide left or right depending on how the Color Code is rotated.]
  forever
    take a [Vision1 v] snapshot of [Red_Blue v]
    if <[Vision1 v] object exists?> then
      if <<[10] [math_less_than v] ([Vision1 v] object [angle v])> and <([Vision1 v] object [angle v]) [math_less_than v] [90]>> then
        move [right v]
      else if <<[90] [math_less_than v] ([Vision1 v] object [angle v])> and <([Vision1 v] object [angle v]) [math_less_than v] [170]>> then
        move [left v]
      else
        stop all movement
      end
    else
      stop all movement
    end
  end