Vision#

Introduction#

The VEX AIR Drone has two Vision Sensors that detect and track AprilTag IDs. These allow the drone to analyze its surroundings, follow objects, and react to detected visual data.

Below is a list of the available block categories:

Actions — Control the Vision feed and capture object data.

Settings — Adjust which detected objects are accessed.

Values — Retrieve object presence, classification, and properties.

Actions#

get object data#

The get object data block filters data from the Vision Sensor’s frame. The Vision Sensors can detect AprilTag IDs.

The dataset stores objects ordered from largest to smallest by width, starting with item 1. Each object’s properties can be accessed using the Vision object property block. An empty dataset is returned if no matching objects are detected.

The Get object data stack block.#
  get [all AprilTags v] data from Vision [forward v] camera

Parameter    Description

signature    Filters the dataset to only include data of the given signature. Available signatures are:

  • all AprilTags

Example

  when started :: hat events
  [Climb upward when an AprilTag ID is detected.]
  take off to [300] [mm v] ▶
  forever
  get [all AprilTags v] data from Vision [forward v] camera
  if <Vision object exists?> then
  climb [up v] for [200] [mm v] ▶
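
The dataset captured by the get object data block behaves like a snapshot list ordered from largest to smallest. As a rough mental model only (plain Python with made-up field names, not the VEX AIR API), it can be pictured like this:

  # Conceptual sketch, not the VEX AIR API; detection values are hypothetical.
  raw_detections = [
      {"id": 3, "width": 52},
      {"id": 21, "width": 140},
      {"id": 9, "width": 88},
  ]

  # "get [all AprilTags] data" works like taking a snapshot and ordering it
  # from largest to smallest by width; item 1 is the widest detection.
  dataset = sorted(raw_detections, key=lambda d: d["width"], reverse=True)

  # "Vision object exists?" corresponds to the snapshot being non-empty.
  print(len(dataset) > 0)      # True
  print(dataset[0]["id"])      # 21, the largest detection (item 1)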

Settings#

set Vision object item#

The set Vision object item block sets which item in the dataset to use.

The Set AI Vision object item stack block.#
  set Vision object item to [1]

Parameter    Description

item         The number of the item in the dataset to use.

Example

  when started :: hat events
  [Display the smallest-appearing AprilTag ID.]
  forever
  get [all AprilTags v] data from Vision [forward v] camera
  clear console
  if <Vision object exists?> then
  set Vision object item to (Vision object count)
  print (Vision object [id v]) on console ▶
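
Because the dataset is ordered from largest to smallest and items are numbered starting at 1 (the set Vision object item block defaults to 1), selecting item number Vision object count picks the smallest-appearing detection. A plain-Python sketch of that reasoning (hypothetical values, not the VEX AIR API):

  # Conceptual sketch, not the VEX AIR API.
  dataset = [
      {"id": 21, "width": 140},   # item 1 (largest)
      {"id": 9, "width": 88},     # item 2
      {"id": 3, "width": 52},     # item 3 (smallest)
  ]

  count = len(dataset)            # "Vision object count"
  item = count                    # "set Vision object item to (Vision object count)"
  selected = dataset[item - 1]    # items are numbered from 1, Python lists from 0
  print(selected["id"])           # 3, the smallest-appearing AprilTag ID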

Values#

Vision object exists?#

The Vision object exists? block reports if the specified Vision Sensor currently detects any objects. This block returns a Boolean value:

  • True — The specified Vision Sensor detects an object.

  • False — The specified Vision Sensor does not detect an object.

The Vision object exists? Boolean block.#
  <Vision object exists?>

This block has no parameters.

Example

  when started :: hat events
  [Climb upward when an AprilTag ID is detected.]
  take off to [300] [mm v] ▶
  forever
  get [all AprilTags v] data from Vision [forward v] camera
  if <Vision object exists?> then
  climb [up v] for [200] [mm v] ▶

Vision object is AprilTag ID#

The Vision object is AprilTag ID block reports whether the detected AprilTag has a specific ID number. This block returns a Boolean value:

  • True — The detected AprilTag’s ID matches the given number.

  • False — The detected AprilTag’s ID does not match the given number.

The AI Vision detected AprilTag is Boolean block.#
  <Vision object is AprilTag [1]>

Parameter          Description

AprilTag number    The ID number, from 0 to 37, to compare against the detected AprilTag’s ID.

Example

  when started :: hat events
  [Display when the target AprilTag ID is detected.]
  forever
  get [all AprilTags v] data from Vision [forward v] camera
  clear console
  if <Vision object exists?> then
  if <Vision object is AprilTag [21]> then
  print [Target found!] on console ▶
  end

Vision object count#

The Vision object count block returns the number of detected objects in the dataset.

The Vision object count reporter block.#
  (Vision object count)

This block has no parameters.

Example

  when started :: hat events
  [Display how many AprilTags are detected.]
  forever
  get [all AprilTags v] data from Vision [forward v] camera
  clear console
  print (Vision object count) on console ▶
  wait [0.2] seconds

Vision object property#

Each object stored by the get object data block includes the properties shown below.

The AI Vision object property reporter block.#
  (Vision object [width v])

Some property values are based on the detected object’s position in the Vision Sensor’s view at the time the get object data block was used. Each Vision Sensor has a different resolution (see the sketch after this list):

  • Forward-facing: 640 x 480 pixels

  • Downward-facing: 640 x 400 pixels
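
Those resolutions mean the forward-facing frame’s midpoints are x = 320 and y = 240, while the downward-facing frame’s vertical midpoint is y = 200; the centerX and centerY examples later in this section compare against these midpoints. A plain-Python sketch of that arithmetic (hypothetical values, not the VEX AIR API):

  # Conceptual sketch, not the VEX AIR API.
  # Forward-facing frame: 640 x 480 pixels.
  FRAME_WIDTH, FRAME_HEIGHT = 640, 480
  MID_X, MID_Y = FRAME_WIDTH // 2, FRAME_HEIGHT // 2   # 320, 240

  center_x, center_y = 410, 150   # hypothetical centerX / centerY readings

  # Assuming y increases downward, as the centerY example suggests:
  horizontal = "right half" if center_x > MID_X else "left half"
  vertical = "upper half" if center_y < MID_Y else "lower half"
  print(horizontal, "/", vertical)   # right half / upper half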

Parameter    Description

property     Which property of the detected object to use:

width#

width returns the width of the detected object in pixels as an integer from 1 to 640.

The AI Vision object property stack block with its parameter set to width.#
  (Vision object [width v])

Example

  when started :: hat events
  [Show if the AprilTag ID appears large or small.]
  forever 
  get [all AprilTags v] data from Vision [forward v] camera
  clear console
  if <Vision object exists?> then
  if <(Vision object [width v]) [math_greater_than v] [100]> then
  print [Large] on console ▶
  else
  print [Small] on console ▶
  end 
  end
  wait [0.2] seconds

height#

height returns the height of the detected object in pixels as an integer from 1 to 480.

The AI Vision object property stack block with its parameter set to height.#
  (Vision object [height v])

Example

  when started :: hat events
  [Show if the AprilTag ID appears large or small.]
  forever
  get [all AprilTags v] data from Vision [forward v] camera
  clear console
  if <Vision object exists?> then
  if <(Vision object [height v]) [math_greater_than v] [100]> then
  print [Large] on console ▶
  else
  print [Small] on console ▶
  end
  end
  wait [0.2] seconds

centerX#

centerX returns the x coordinate of the center of the detected object in pixels as an integer from 1 to 640.

The AI Vision object property stack block with its parameter set to centerX.#
  (Vision object [centerX v])

Example

  when started :: hat events
  [Show if an AprilTag is to the left or the right of the camera.]
  forever
  get [all AprilTags v] data from Vision [forward v] camera
  clear console
  if <Vision object exists?> then
  if <(Vision object [centerX v]) [math_less_than v] [320]> then
  print [To the left] on console ▶
  else
  print [To the right] on console ▶
  end
  end
  wait [0.2] seconds

centerY#

centerY returns the y coordinate of the center of the detected object in pixels as an integer from 1 to 480.

The AI Vision object property stack block with its parameter set to centerY.#
  (Vision object [centerY v])

Example

  when started :: hat events
  [Take off and attempt to align the drone with an AprilTag ID on the wall.]
  [Adjust the takeoff height to observe different results.]
  take off to [800] [mm v] ▶
  get [all AprilTags v] data from Vision [forward v] camera
  if <Vision object exists?> then
  if <(Vision object [centerY v]) [math_less_than v] [240]> then
  print [Below the target!] on console ▶
  else
  print [Above the target!] on console ▶
  end
  else
  print [No target found!] on console ▶

bearing#

bearing returns an angle indicating an object’s position relative to the drone. The behavior depends on which Vision Sensor is used.

A value of 0° indicates the center of the object and sensor are aligned. Positive values mean the object is to the right, while negative values mean the object is to the left.

Sensor      Returns

Downward    Degrees the front of the drone must turn to align with the center of the object, from -180° to 180°.

Forward     Degrees the object is offset to the left or right of the Vision Sensor’s center, from -180° to 180°.

The AI Vision object property stack block with its parameter set to bearing.#
  (Vision object [bearing v])

Example

  when started :: hat events
  [Align the front of the drone to an AprilTag ID.]
  set turn velocity to (15) %
  take off to [500] [mm v] ▶
  forever
  get [all AprilTags v] data from Vision [downward v] camera
  if <Vision object exists?> then
  if <[-20] [math_less_than v] (Vision object [bearing v]) [math_less_than v] [20]> then
  [Hover once sensor is centered on the AprilTag ID.]
  hover
  play sound [success v] ▶
  else if <(Vision object [bearing v]) [math_less_than v] [-21]> then
  turn [left v]
  else
  turn [right v]
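
As a rough illustration of the sign convention used above (positive bearing means the object is to the right, negative means it is to the left), the example’s steering decision can be sketched in plain Python. The ±20° deadband is simply the value chosen in the block example, not a fixed requirement:

  # Conceptual sketch, not the VEX AIR API.
  def steering_decision(bearing_deg, deadband_deg=20):
      if -deadband_deg <= bearing_deg <= deadband_deg:
          return "hover"        # close enough to centered
      if bearing_deg < 0:
          return "turn left"    # object is to the left of center
      return "turn right"       # object is to the right of center

  for bearing in (-45, -5, 0, 12, 60):
      print(bearing, "->", steering_decision(bearing))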

rotation#

rotation returns the orientation of the detected AprilTag as an integer in degrees from 1 to 360.

The AI Vision object property stack block with its parameter set to rotation.#
  (Vision object [rotation v])

Example

  when started :: hat events
  [Turn left or right based on the rotation of an AprilTag ID.]
  take off to [1000] [mm v] ▶
  forever
  get [all AprilTags v] data from Vision [forward v] camera
  if <Vision object exists?> then
  if <[240] [math_less_than v] (Vision object [rotation v]) [math_less_than v] [300]> then
  turn [left v] for [90] degrees ▶
  else if <[60] [math_less_than v] (Vision object [rotation v]) [math_less_than v] [120]> then
  turn [right v] for [90] degrees ▶
  else
  hover
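
The 60° to 120° and 240° to 300° bands above come from this particular example, not from a fixed convention; they simply treat a tag rotated roughly a quarter turn one way or the other as a cue to turn. A plain-Python sketch of the same classification (hypothetical, not the VEX AIR API):

  # Conceptual sketch, not the VEX AIR API; the bands mirror the block example.
  def classify_rotation(rotation_deg):
      if 240 < rotation_deg < 300:
          return "turn left 90"
      if 60 < rotation_deg < 120:
          return "turn right 90"
      return "hover"

  for rotation in (45, 90, 180, 270, 355):
      print(rotation, "->", classify_rotation(rotation))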

originX#

originX returns the x-coordinate of the top-left corner of the detected object’s bounding box in pixels as an integer from 1 to 640.

The AI Vision object property stack block with its parameter set to originX.#
  (Vision object [originX v])

Example

  when started :: hat events
  [Draw lines to the AprilTag origin from all corners.]
  forever
  clear screen
  get [all AprilTags v] data from Vision [forward v] camera
  if <Vision object exists?> then
  draw line [0] [0] (Vision object [originX v]) (Vision object [originY v]) on screen
  draw line [0] [480] (Vision object [originX v]) (Vision object [originY v]) on screen
  draw line [640] [0] (Vision object [originX v]) (Vision object [originY v]) on screen
  draw line [640] [480] (Vision object [originX v]) (Vision object [originY v]) on screen
  end
  wait [0.2] seconds

originY#

originY returns the y-coordinate of the top-left corner of the detected object’s bounding box in pixels as an integer from 1 to 480.

The AI Vision object property stack block with its parameter set to originY.#
  (Vision object [originY v])

Example

  when started :: hat events
  [Draw lines to the AprilTag origin from all corners.]
  forever
  clear screen
  get [all AprilTags v] data from Vision [forward v] camera
  if <Vision object exists?> then
  draw line [0] [0] (Vision object [originX v]) (Vision object [originY v]) on screen
  draw line [0] [480] (Vision object [originX v]) (Vision object [originY v]) on screen
  draw line [640] [0] (Vision object [originX v]) (Vision object [originY v]) on screen
  draw line [640] [480] (Vision object [originX v]) (Vision object [originY v]) on screen
  end
  wait [0.2] seconds
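
Because originX and originY give the top-left corner of the bounding box, combining them with the width and height properties recovers the other corners, and the box’s midpoint should roughly match centerX and centerY. A plain-Python sketch (hypothetical values, not the VEX AIR API):

  # Conceptual sketch, not the VEX AIR API; property values are hypothetical.
  origin_x, origin_y = 250, 180    # originX / originY (top-left corner)
  width, height = 120, 118         # width / height in pixels

  top_left = (origin_x, origin_y)
  bottom_right = (origin_x + width, origin_y + height)
  midpoint = (origin_x + width / 2, origin_y + height / 2)

  print("top-left:", top_left)          # (250, 180)
  print("bottom-right:", bottom_right)  # (370, 298)
  print("midpoint:", midpoint)          # (310.0, 239.0)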

id#

id returns the ID number of the detected AprilTag as an integer.

The AI Vision object property stack block with its parameter set to id.#
  (Vision object [id v])

Example

  when started :: hat events
  [Display the detected AprilTag ID.]
  forever
  get [all AprilTags v] data from Vision [forward v] camera
  clear console
  if <Vision object exists?> then
  print (Vision object [id v]) on console ▶
  end
  wait [0.2] seconds