vision#

The Vision Sensor for VEX IQ detects and tracks Color Signatures and Color Codes, allowing it to analyze its surroundings and react to detected visual data. Below is a list of available methods, properties, and constructors:

Methods – Get data from the Vision Sensor.

  • takeSnapshot – Captures data for a specific Color Signature or Color Code.

  • largestObject – Returns the largest object from the most recent snapshot.

  • objectCount – Returns the number of detected objects as an integer.

  • objects – Returns an array containing the properties of detected objects.

  • installed – Returns whether the Vision Sensor is connected to the IQ Brain.

Properties – Object data returned from takeSnapshot.

  • .exists – Whether the object exists in the current snapshot.

  • .width – Width of the detected object in pixels.

  • .height – Height of the detected object in pixels.

  • .centerX – X position of the object’s center in pixels.

  • .centerY – Y position of the object’s center in pixels.

  • .angle – Orientation of the Color Code in degrees.

  • .originX – X position of the object’s top-left corner in pixels.

  • .originY – Y position of the object’s top-left corner in pixels.

Constructors – Manually initialize and configure the Vision Sensor.

In VEXcode, the initialization of the Vision Sensor and its configured Color Signatures and Color Codes is done automatically. For the examples below, the configured Vision Sensor will be named Vision1. To manually initialize and construct a Vision Sensor and its Color Signatures and Color Codes, refer to the Constructors section on this page.

Methods#

takeSnapshot#

takeSnapshot captures an image from the Vision Sensor, processes it based on the configured Color Signatures and Color Codes, and updates the objects array. This method can also limit the number of objects captured in the snapshot.

Color Signatures and Color Codes must be configured first in the Vision Utility before they can be used with this method.

The objects array stores objects ordered from largest to smallest by width, starting at index 0. Each object’s properties can be accessed using its index. objects is an empty array if no matching objects are detected.

Default Usage:
Vision1.takeSnapshot(signature)

Parameters

  • signature – The Color Signature or Color Code to detect. The configured object's name is the Vision Sensor’s name, followed by two underscores, and then the Color Signature or Color Code’s name. For example: Vision1__REDBOX or Vision1__BOXCODE.

while (true) {
  // Take a snapshot to check for detected objects.
  Vision1.takeSnapshot(Vision1__BLUEBOX);

  // Clear the screen/reset so that we can display
  // new information.
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1, 1);

  // If objects were found, print the location.
  if (Vision1.objects[0].exists) {
    Brain.Screen.print("Center X: %d", Vision1.largestObject.centerX);
  } 
  else {
    Brain.Screen.print("no object");
  }

  wait(0.5, seconds);
}

Overload

  • Vision1.takeSnapshot(signature, count)

Overload Parameters

  • count – The maximum number of objects to include in the snapshot, as a uint32_t. The largest detected objects are kept.

Overload Examples

// Display a location if a blue box is detected
while (true) {
  // Take a snapshot of only one object
  Vision1.takeSnapshot(Vision1__BLUEBOX, 1);

  // Clear the screen/reset so that we can display
  // new information.
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1, 1);

  // If object was found, print the location.
  if (Vision1.objects[0].exists) {
    Brain.Screen.print("Center X: %d", Vision1.largestObject.centerX);
  } 
  else {
    Brain.Screen.print("no object");
  }

  wait(0.5, seconds);
}

Color Signatures#

A Color Signature is a unique color that the Vision Sensor can recognize. These signatures allow the sensor to detect and track objects based on their color. Once a Color Signature is configured, the sensor can identify objects with that specific color in its field of view. Color Signatures are used with takeSnapshot to detect colored objects in real time.

To use a configured Color Signature in a project, pass its configured object, which is named using the Vision Sensor’s name, followed by two underscores, and then the Color Signature’s name. For example: Vision1__REDBOX.

// Display if any objects match the REDBOX signature
while (true) {
  // Clear the screen/reset so that we can display
  // new information.
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1, 1);

  // Take a snapshot to check for detected objects.
  // Change to any configured Color Signature.
  Vision1.takeSnapshot(Vision1__REDBOX);

  if (Vision1.objects[0].exists) {
    Brain.Screen.print("Color Signature");
    Brain.Screen.newLine();
    Brain.Screen.print("detected!");
  }
  wait(0.1, seconds);
}

Color Codes#

A Color Code is a structured pattern made up of color signatures arranged in a specific order. These codes allow the Vision Sensor to recognize predefined patterns of colors. Color codes are useful for identifying complex objects or creating unique markers for autonomous navigation.

To use a configured Color Code in a project, pass its configured object, which is named using the Vision Sensor’s name, followed by two underscores, and then the Color Code’s name. For example: Vision1__BOXCODE.

// Display if any objects match the BOXCODE code
while (true) {
  // Clear the screen/reset so that we can display
  // new information.
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1, 1);

  // Take a snapshot to check for detected objects.
  // Change to any configured Color Code.
  Vision1.takeSnapshot(Vision1__BOXCODE);

  if (Vision1.objects[0].exists) {
    Brain.Screen.print("Color Code");
    Brain.Screen.newLine();
    Brain.Screen.print("detected!");
  }
  wait(0.1, seconds);
}

largestObject#

largestObject retrieves the largest detected object from the objects array.

This method can be used to always get the largest object from objects without specifying an index.

Default Usage:
Vision1.largestObject

while (true){
  // Take a snapshot to check for detected objects.
  Vision1.takeSnapshot(Vision1__BLUEBOX);
  // Clear the screen/reset so that we can display
  // new information.
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1, 1);
  // If objects were found, print the location 
  // of largest.
  if (Vision1.objects[0].exists) {
    Brain.Screen.print("Center X: %d", Vision1.largestObject.centerX);
  } 
  else {
    Brain.Screen.print("no object");
  }
  wait(0.5, seconds);
}

objectCount#

objectCount returns the number of items inside the objects array as an integer.

Default Usage:
Vision1.objectCount

while (true) {
  // Take a snapshot to check for detected objects.
  Vision1.takeSnapshot(Vision1__BLUEBOX);

  // Clear the screen/reset so that we can display
  // new information.
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1, 1);

  // Print how many objects were detected.
  Brain.Screen.print("object count: %d", Vision1.objectCount);

  wait(0.5, seconds);
}

installed#

installed returns an integer indicating whether the Vision Sensor is currently connected to the IQ Brain.

  • 1 – The Vision Sensor is connected to the IQ Brain.

  • 0 – The Vision Sensor is not connected to the IQ Brain.

This method has no parameters.

// Display a message if the Vision Sensor is detected
if (Vision1.installed()){
  Brain.Screen.print("Vision Sensor");
  Brain.Screen.newLine();
  Brain.Screen.print("Installed!");
}

objects#

objects returns an array of detected object properties. Use the array to access specific property values of individual objects.

Default Usage:
Vision1.objects
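
For example (a minimal sketch assuming the same configured Vision1 sensor and Vision1__BLUEBOX Color Signature used in the other examples on this page), the array can be stepped through by index after a snapshot to read each detected object’s properties:

// Print the width of every detected blue box,
// largest first.
while (true) {
  // Take a snapshot to check for detected objects.
  Vision1.takeSnapshot(Vision1__BLUEBOX);

  // Clear the screen/reset so that we can display
  // new information.
  Brain.Screen.clearScreen();

  // objects is ordered from largest to smallest,
  // starting at index 0.
  for (int i = 0; i < Vision1.objectCount; i++) {
    Brain.Screen.setCursor(i + 1, 1);
    Brain.Screen.print("Object %d width: %d", i, Vision1.objects[i].width);
  }

  wait(0.5, seconds);
}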

Properties#

Each object stored in the objects array after takeSnapshot is used includes the following eight properties.

Some property values are based on the detected object’s position in the Vision Sensor’s view at the time that takeSnapshot was used. The Vision Sensor has a resolution of 316 by 212 pixels.
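
As a short sketch (assuming the same configured Vision1 sensor and Vision1__BLUEBOX Color Signature used elsewhere on this page), several of these properties can be read from the largest detected object after a snapshot:

// Display the size and position of the largest
// detected blue box.
while (true) {
  // Take a snapshot to check for detected objects.
  Vision1.takeSnapshot(Vision1__BLUEBOX);

  // Clear the screen/reset so that we can display
  // new information.
  Brain.Screen.clearScreen();

  if (Vision1.objects[0].exists) {
    // Width and height of the bounding box in pixels.
    Brain.Screen.setCursor(1, 1);
    Brain.Screen.print("Size: %d x %d", Vision1.objects[0].width, Vision1.objects[0].height);
    // Center of the bounding box in pixels.
    Brain.Screen.setCursor(2, 1);
    Brain.Screen.print("Center: %d %d", Vision1.objects[0].centerX, Vision1.objects[0].centerY);
    // Top-left corner of the bounding box in pixels.
    Brain.Screen.setCursor(3, 1);
    Brain.Screen.print("Origin: %d %d", Vision1.objects[0].originX, Vision1.objects[0].originY);
  }

  wait(0.5, seconds);
}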

.exists#

.exists returns an integer indicating whether an object exists at that index of the objects array.

  • 1 – An object exists at that index.

  • 0 – No object exists at that index.

// Check if at least one object is detected
while (true) {
  // Take a snapshot to check for detected objects.
  Vision1.takeSnapshot(Vision1__BLUEBOX);

  // Clear the screen/reset so that we can display
  // new information.
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1, 1);

  // If an object exists, print its location.
  if (Vision1.objects[0].exists) {
    Brain.Screen.print("Center X: %d", Vision1.objects[0].centerX);
  } 
  else {
    Brain.Screen.print("no objects");
  }

  wait(0.5, seconds);
}

.width#

.width returns the width of the detected object in pixels, which is an integer between 1 and 316.

// Move towards a blue box until its width is
// larger than 100 pixels
while (true){
  Vision1.takeSnapshot(Vision1__BLUEBOX);

  if (Vision1.objects[0].width < 100) {
    Drivetrain.driveFor(forward, 10, mm);
  } 
  else {
    Drivetrain.stop();
  }

  wait(0.5, seconds);
}

.height#

.height returns the height of the detected object in pixels, which is an integer between 1 and 212.

// Move towards a blue box until its height is
// larger than 100 pixels
while (true){
  Vision1.takeSnapshot(Vision1__BLUEBOX);

  if (Vision1.objects[0].height < 100) {
    Drivetrain.driveFor(forward, 10, mm);
  } 
  else {
    Drivetrain.stop();
  }

  wait(0.5, seconds);
}

.centerX#

.centerX returns the x-coordinate of the detected object’s center in pixels, which is an integer between 0 and 316.

// Turn slowly until a blue box is centered in
// front of the robot
Drivetrain.setTurnVelocity(10,percent);
Drivetrain.turn(right);

while (true){
  Vision1.takeSnapshot(Vision1__BLUEBOX);

  if (Vision1.objects[0].exists){
    if (140 < Vision1.largestObject.centerX && Vision1.largestObject.centerX < 180){
      Drivetrain.stop();
    }
  }
  wait(0.5,seconds);
}

.centerY#

.centerY returns the y-coordinate of the detected object’s center in pixels, which is an integer between 0 and 212.

// Move towards a blue object until its
// center y-coordinate is more than 140 pixels
while (true){
  Vision1.takeSnapshot(Vision1__BLUEBOX);

  if (Vision1.objects[0].exists){
    if (Vision1.largestObject.centerY < 140){
        Drivetrain.drive(forward);
    }
  }
  else{
    Drivetrain.stop();
  }
  wait(0.5,seconds);
}

.angle#

.angle returns the orientation of the detected object in degrees, which is a double between 0 and 360.

// Turn right or left depending on how the
// configured box code is rotated.
while (true){
  Vision1.takeSnapshot(Vision1__BOXCODE);

  if (Vision1.objects[0].exists){
    if (70 < Vision1.objects[0].angle && Vision1.objects[0].angle < 110){
      Drivetrain.turnFor(right, 45, degrees);
    }
    else if (250 < Vision1.objects[0].angle && Vision1.objects[0].angle < 290){
      Drivetrain.turnFor(left, 45, degrees);
    }
    else{
      Drivetrain.stop();
    }
  }
  wait(0.5,seconds);
}

.originX#

.originX returns the x-coordinate of the top-left corner of the detected object’s bounding box in pixels, which is an integer between 0 and 316.

// Display if a red box is to the
// left or the right
while (true){
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1,1);
    
  Vision1.takeSnapshot(Vision1__REDBOX);
    
  if (Vision1.objects[0].exists){
    if (Vision1.objects[0].originX < 160){
      Brain.Screen.print("To the left!");
    }
    else{ 
      Brain.Screen.print("To the right!");
    }
  }
  wait(0.5,seconds);
}

.originY#

.originY returns the y-coordinate of the top-left corner of the detected object’s bounding box in pixels, which is an integer between 0 and 212.

// Display if a red box is close or far
// from the robot.
while (true){
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1,1);
    
  Vision1.takeSnapshot(Vision1__REDBOX);

  if (Vision1.objects[0].exists){
    if (Vision1.objects[0].originY < 80){
      Brain.Screen.print("Far");
    }
    else{
      Brain.Screen.print("Close");
    }
  }
  wait(0.5,seconds);
}

Constructors#

Constructors are used to manually create vision, signature, and code objects, which are necessary for configuring the Vision Sensor outside of VEXcode. When fewer arguments are passed, the constructor's default arguments or overloads are used.

For the examples below, the configured Vision Sensor is named Vision1. The configured Color Signature objects, such as Vision1__BLUEBOX, are used throughout this API documentation when referring to vision class methods.

Vision Sensor#

vision creates a Vision Sensor object and configures the port, brightness level, and signatures used with the sensor.

Default Usage:
vision(port, brightness, sigs)

Parameters

  • port – A valid Smart Port that the Vision Sensor is connected to.

  • brightness – The brightness value for the Vision Sensor, from 10 to 150.

  • sigs – The name of one or more previously created Color Signature or Color Code objects.

// Construct a vision object Vision1 with one Color
// Signature, Vision1__REDBOX.
vision::signature Vision1__REDBOX = vision::signature(1, 10121, 10757, 10439, -1657, -1223, -1440, 2.5, 1);
vision Vision1 = vision(PORT1, 50, Vision1__REDBOX);

while (true) {
  // Take a snapshot to check for detected objects.
  Vision1.takeSnapshot(Vision1__REDBOX);

  // Clear the screen/reset so that we can display
  // new information.
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1, 1);

  // If objects were found, print the location.
  if (Vision1.objects[0].exists) {
    Brain.Screen.print("Center X: %d", Vision1.largestObject.centerX);
  } else {
    Brain.Screen.print("no object");
  }

  wait(0.5, seconds);
}

Color Signature#

signature creates a Color Signature. Up to seven different Color Signatures can be stored on a Vision Sensor at once.

Default Usage:

signature(index, uMin, uMax, uMean, vMin, vMax, vMean, rgb, type)

Parameters

  • index – The signature object’s index, from 1 to 7. Note: Creating two signature objects with the same index number will cause the second created object to override the first.

  • uMin – The value from uMin in the Vision Utility.

  • uMax – The value from uMax in the Vision Utility.

  • uMean – The value from uMean in the Vision Utility.

  • vMin – The value from vMin in the Vision Utility.

  • vMax – The value from vMax in the Vision Utility.

  • vMean – The value from vMean in the Vision Utility.

  • rgb – The value from rgb in the Vision Utility.

  • type – The value from type in the Vision Utility.

In order to obtain the values to create a Color Signature, go to the Vision Utility. Once a Color Signature is configured, copy the parameter values from the Configuration window.

// Construct a vision object Vision1 with two Color
// Signatures, Vision1__REDBOX and Vision1__BLUEBOX.
vision::signature Vision1__REDBOX = vision::signature(1, 10121, 10757, 10439, -1657, -1223, -1440, 2.5, 1);
vision::signature Vision1__BLUEBOX = vision::signature(2, -4479, -3277, -3878, 5869, 7509, 6689, 2.5, 1);
vision Vision1 = vision(PORT1, 50, Vision1__REDBOX, Vision1__BLUEBOX);

while (true) {
  // Take a snapshot to check for detected objects.
  Vision1.takeSnapshot(Vision1__BLUEBOX);

  // Clear the screen/reset so that we can display
  // new information.
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1, 1);

  // If objects were found, print the location.
  if (Vision1.objects[0].exists) {
    Brain.Screen.print("Center X: %d", Vision1.largestObject.centerX);
  } else {
    Brain.Screen.print("no object");
  }

  wait(0.5, seconds);
}

Color Code#

code creates a Color Code. It requires at least two previously defined Color Signatures. Up to eight different Color Codes can be stored on a Vision Sensor at once.

Default Usage:

code(sig1, sig2)

Parameters

  • sig1 – A previously created signature object, or the int32_t index of a previously created signature object.

  • sig2 – A previously created signature object, or the int32_t index of a previously created signature object.

// Construct a vision object Vision1 with two Color
// Signatures, Vision1__REDBOX and Vision1__BLUEBOX,
// alongside a Color Code for a red box to the left of
// a blue box, Vision1__BOXCODE.
vision::signature Vision1__REDBOX = vision::signature(1, 10121, 10757, 10439, -1657, -1223, -1440, 2.5, 1);
vision::signature Vision1__BLUEBOX = vision::signature(2, -4479, -3277, -3878, 5869, 7509, 6689, 2.5, 1);
vision::code Vision1__BOXCODE = vision::code(Vision1__REDBOX, Vision1__BLUEBOX);
vision Vision1 = vision(PORT1, 50, Vision1__REDBOX, Vision1__BLUEBOX);

// Turn right or left depending on how the
// configured box code is rotated.
while (true){
  Vision1.takeSnapshot(Vision1__BOXCODE);

  if (Vision1.objects[0].exists){
    if (70 < Vision1.objects[0].angle && Vision1.objects[0].angle < 110){
      Drivetrain.turnFor(right, 45, degrees);
    }
    else if (250 < Vision1.objects[0].angle && Vision1.objects[0].angle < 290){
      Drivetrain.turnFor(left, 45, degrees);
    }
    else{
      Drivetrain.stop();
    }
  }
  wait(0.5,seconds);
}

Overloads

  • code(sig1, sig2, sig3)

  • code(sig1, sig2, sig3, sig4)

  • code(sig1, sig2, sig3, sig4, sig5)

Overload Parameters

  • sig3 – A previously created signature object, or the int32_t index of a previously created signature object.

  • sig4 – A previously created signature object, or the int32_t index of a previously created signature object.

  • sig5 – A previously created signature object, or the int32_t index of a previously created signature object.

// Construct a vision object Vision1 with two Color
// Signatures, Vision1__REDBOX and Vision1__BLUEBOX,
// alongside a Color Code for a red box to the left of
// a blue box, alternating for 5 boxes, Vision1__BOXCODE.
vision::signature Vision1__REDBOX = vision::signature(1, 10121, 10757, 10439, -1657, -1223, -1440, 2.5, 1);
vision::signature Vision1__BLUEBOX = vision::signature(2, -4479, -3277, -3878, 5869, 7509, 6689, 2.5, 1);
vision::code Vision1__BOXCODE = vision::code(Vision1__REDBOX, Vision1__BLUEBOX, Vision1__REDBOX, Vision1__BLUEBOX, Vision1__REDBOX);
vision Vision1 = vision(PORT1, 50, Vision1__REDBOX, Vision1__BLUEBOX);

// Turn right or left depending on how the
// configured box code is rotated.
while (true){
  Vision1.takeSnapshot(Vision1__BOXCODE);

  if (Vision1.objects[0].exists){
    if (70 < Vision1.objects[0].angle && Vision1.objects[0].angle < 110){
      Drivetrain.turnFor(right, 45, degrees);
    }
    else if (250 < Vision1.objects[0].angle && Vision1.objects[0].angle < 290){
      Drivetrain.turnFor(left, 45, degrees);
    }
    else{
      Drivetrain.stop();
    }
  }
  wait(0.5,seconds);
}