Vision Sensor#

Introduction#

The Vision Sensor enables robots to detect and track visual information in their environment. By identifying colors and patterns, the Vision Sensor allows robots to analyze their surroundings and respond to what they see.

This page uses Vision1 as the example Vision Sensor name. Color Signature objects (such as BLUEBOX) and Color Code objects (such as BOXCODE) are also used in examples. Replace these names with your own configured names as needed.

Below is a list of all methods:

Getters — Get data from the Vision Sensor.

  • takeSnapshot — Captures data for a specific Color Signature or Color Code.

  • installed — Whether the Vision Sensor is connected to the V5 Brain.

Properties — Object data returned from takeSnapshot.

  • largestObject — Returns the largest object detected in the snapshot.

  • objectCount — Returns the number of detected objects as an integer.

  • objects — Returns an array containing the properties of detected objects.

  • .exists — Whether the object exists in the current detection as a Boolean.

  • .width — Width of the detected object in pixels.

  • .height — Height of the detected object in pixels.

  • .centerX — x position of the object’s center in pixels.

  • .centerY — y position of the object’s center in pixels.

  • .originX — x position of the object’s top-left corner in pixels.

  • .originY — y position of the object’s top-left corner in pixels.

  • .angle — Orientation of the Color Code in degrees.

Constructors — Manually initialize and configure the sensors.

Getters#

takeSnapshot#

takeSnapshot captures an image from the Vision Sensor, processes it based on the signature, and updates the objects array. This method can also limit the number of objects captured in the snapshot.

Color Signatures and Color Codes must be configured first in the Vision Utility before they can be used with this method.

The objects array stores objects ordered from largest to smallest by width, starting at index 0. Each object’s properties can be accessed using its index. objects is an empty array if no matching objects are detected.

Default Usage:
Vision1.takeSnapshot(signature)

Overload Usage:
Vision1.takeSnapshot(signature, count)

Parameters

  • signature — The signature to retrieve data for: the name of the Vision Sensor, two underscores, and then the Color Signature’s or Color Code’s name. For example: Vision1__BLUEBOX.

  • count — Optional. The maximum number of objects to return, as a uint32_t from 1 to 24 (default: 8). If more objects are detected, only the largest are included.

int main() {
  // Initializing Robot Configuration. DO NOT REMOVE!
  vexcodeInit();

  while (true) {
    // Display if a blue object is detected
    Vision1.takeSnapshot(Vision1__BLUEBOX);

    Brain.Screen.clearScreen();
    Brain.Screen.setCursor(1, 1);

    if (Vision1.objects[0].exists) {
      Brain.Screen.print("Blue detected");
    } 
    else {
      Brain.Screen.print("No blue");
    }

    wait(0.5, seconds);
  }
}
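
As a minimal sketch of the count overload described above (using the same example Color Signature, Vision1__BLUEBOX), the snapshot below keeps at most the four largest blue objects and displays how many were actually stored:

int main() {
  // Initializing Robot Configuration. DO NOT REMOVE!
  vexcodeInit();

  while (true) {
    // Capture at most the 4 largest blue objects
    Vision1.takeSnapshot(Vision1__BLUEBOX, 4);

    Brain.Screen.clearScreen();
    Brain.Screen.setCursor(1, 1);

    // objectCount reflects how many objects were actually stored
    Brain.Screen.print("Objects kept: %d", Vision1.objectCount);

    wait(0.5, seconds);
  }
}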

Color Signatures#

A Color Signature is a unique color that the Vision Sensor can recognize. These signatures allow the sensor to detect and track objects based on their color. Once a Color Signature is configured, the sensor can identify objects with that specific color in its field of view. Color signatures are used with takeSnapshot to process and detect colored objects in real-time.

To use a configured Color Signature in a project, reference it in the following format: the Vision Sensor’s name, followed by two underscores, and then the Color Signature’s name. For example: Vision1__BLUEBOX.

int main() {
  // Initializing Robot Configuration. DO NOT REMOVE!
  vexcodeInit();

  // Display when a blue object is detected
  while (true) {
    Brain.Screen.setCursor(1, 1);
    Brain.Screen.clearLine(1);

    Vision1.takeSnapshot(Vision1__BLUEBOX);

    if (Vision1.objects[0].exists) {
      Brain.Screen.print("Color detected!");
    }

    wait(100, msec); 
  }
}

Color Codes#

A Color Code is a structured pattern made up of Color Signatures arranged in a specific order. These codes allow the Vision Sensor to recognize predefined patterns of colors. Color Codes are useful for identifying complex objects or creating unique markers for autonomous navigation.

To use a configured Color Code in a project, reference it in the following format: the Vision Sensor’s name, followed by two underscores, and then the Color Code’s name. For example: Vision1__BOXCODE.

int main() {
  // Initializing Robot Configuration. DO NOT REMOVE!
  vexcodeInit();

  // Display when BOXCODE is detected
  while (true) {
    Brain.Screen.setCursor(1, 1);
    Brain.Screen.clearLine(1);

    Vision1.takeSnapshot(Vision1__BOXCODE);

    if (Vision1.objects[0].exists) {
      Brain.Screen.print("Code detected!");
    }

    wait(100, msec);
  }
}

installed#

installed returns a Boolean indicating whether the Vision Sensor is currently connected to the V5 Brain.

  • true — The Vision Sensor is connected to the V5 Brain.

  • false — The Vision Sensor is not connected to the V5 Brain.

Usage:
Vision1.installed()

Parameters

This method has no parameters.

int main() {
  // Initializing Robot Configuration. DO NOT REMOVE!
  vexcodeInit();

  // Display a message if the Vision Sensor is connected
  if (Vision1.installed()) {
    Brain.Screen.print("Connected");
  }
}

Properties#

There are eight properties included with each object stored in the objects array after takeSnapshot is used.

Some property values are based on the detected object’s position in the Vision Sensor’s view at the time takeSnapshot was used. The Vision Sensor has a resolution of 316 by 212 pixels.
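
As a minimal sketch of how these pixel-based properties fit together (assuming the example Color Signature Vision1__BLUEBOX), the bounding box of the largest detected object can be drawn on the Brain’s screen using its origin, width, and height:

int main() {
  // Initializing Robot Configuration. DO NOT REMOVE!
  vexcodeInit();

  while (true) {
    Vision1.takeSnapshot(Vision1__BLUEBOX);

    Brain.Screen.clearScreen();

    if (Vision1.objects[0].exists) {
      // Draw the detected object's bounding box: top-left corner at
      // (originX, originY), sized by width and height in pixels
      Brain.Screen.drawRectangle(Vision1.objects[0].originX,
                                 Vision1.objects[0].originY,
                                 Vision1.objects[0].width,
                                 Vision1.objects[0].height);
    }

    wait(0.5, seconds);
  }
}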

largestObject#

largestObject retrieves the largest detected object from the objects array.

Use largestObject to always access the largest object without specifying an index.

Default Usage:
Vision1.largestObject

int main() {
  // Initializing Robot Configuration. DO NOT REMOVE!
  vexcodeInit();

  // Display the largest detected object's width
  while (true) {
    Vision1.takeSnapshot(Vision1__BLUEBOX);

    Brain.Screen.clearScreen();
    Brain.Screen.setCursor(1, 1);

    if (Vision1.objects[0].exists) {
      Brain.Screen.print("%d", Vision1.largestObject.width);
    } 
    wait(0.5, seconds);
  }
}

objectCount#

objectCount returns the number of items inside the objects array as an integer.

Default Usage:
Vision1.objectCount

int main() {
  // Initializing Robot Configuration. DO NOT REMOVE!
  vexcodeInit();

  // Display how many blue objects are detected
  while (true) {
    Vision1.takeSnapshot(Vision1__BLUEBOX);

    Brain.Screen.clearScreen();
    Brain.Screen.setCursor(1, 1);

    Brain.Screen.print("Object Count: %d", Vision1.objectCount);

    wait(0.5, seconds);
  }
}

objects#

objects returns an array of detected object properties. Use the array to access specific property values of individual objects.

Default Usage:
Vision1.objects
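
As a minimal sketch (again assuming the example Color Signature Vision1__BLUEBOX), each element of objects can be accessed by its index. Because the array is ordered from largest to smallest, the loop below prints the width of every detected object, largest first:

int main() {
  // Initializing Robot Configuration. DO NOT REMOVE!
  vexcodeInit();

  while (true) {
    Vision1.takeSnapshot(Vision1__BLUEBOX);

    Brain.Screen.clearScreen();

    // Print the width of each detected object, largest first
    for (int i = 0; i < Vision1.objectCount; i++) {
      Brain.Screen.setCursor(i + 1, 1);
      Brain.Screen.print("Object %d width: %d", i, Vision1.objects[i].width);
    }

    wait(0.5, seconds);
  }
}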

.exists#

.exists returns a Boolean indicating whether a detected object is present at that index in the objects array.

  • true — An object exists at that index.

  • false — No object exists at that index.

int main() {
  // Initializing Robot Configuration. DO NOT REMOVE!
  vexcodeInit();

  // Display when a blue object is detected
  while (true) {
    Brain.Screen.setCursor(1, 1);
    Brain.Screen.clearLine(1);

    Vision1.takeSnapshot(Vision1__BLUEBOX);

    if (Vision1.objects[0].exists) {
      Brain.Screen.print("Color detected!");
    }

    wait(100, msec); 
  }
}

.width#

.width returns the width of the detected object in pixels, which is an integer between 1 and 316.

int main() {
  // Initializing Robot Configuration. DO NOT REMOVE!
  vexcodeInit();
  // Approach an object until it's at least 100 pixels wide
  while (true) {
    Vision1.takeSnapshot(Vision1__BLUEBOX);
    if (Vision1.objects[0].exists && Vision1.objects[0].width < 100) {
      Drivetrain.driveFor(forward, 10, mm);
    } 
    else {
      Drivetrain.stop();
    }
    wait(0.5, seconds);
  }
}

.height#

.height returns the height of the detected object in pixels, which is an integer between 1 and 212.

int main() {
  // Initializing Robot Configuration. DO NOT REMOVE!
  vexcodeInit();
  // Approach an object until it's at least 100 pixels tall
  while (true) {
    Vision1.takeSnapshot(Vision1__BLUEBOX);

    if (Vision1.objects[0].exists && Vision1.objects[0].height < 100) {
      Drivetrain.driveFor(forward, 10, mm);
    } 
    else {
      Drivetrain.stop();
    }
    wait(0.5, seconds);  // Avoid over-processing
  }
}

.centerX#

.centerX returns the x-coordinate of the detected object’s center in pixels, which is an integer between 0 and 316.

int main() {
  // Initializing Robot Configuration. DO NOT REMOVE!
  vexcodeInit();
  // Turn until an object is directly in front of the sensor
  Drivetrain.setTurnVelocity(10, percent);
  Drivetrain.turn(right);
  while (true) {
    Vision1.takeSnapshot(Vision1__BLUEBOX);
    if (Vision1.objects[0].exists) {
      int centerX = Vision1.largestObject.centerX;
      if (140 < centerX && centerX < 180) {
        Drivetrain.stop();
      }
    }
    wait(0.5, seconds);
  }
}

.centerY#

.centerY returns the y-coordinate of the detected object’s center in pixels, which is an integer between 0 and 212.

int main() {
  // Initializing Robot Configuration. DO NOT REMOVE!
  vexcodeInit();
  // Drive forward until the object's center is at least 90 pixels down the view
  while (true) {
    Vision1.takeSnapshot(Vision1__BLUEBOX);

    if (Vision1.objects[0].exists) {
      if (Vision1.objects[0].centerY < 90) {
        Drivetrain.drive(forward);
      } else {
        Drivetrain.stop();
      }
    } else {
      Drivetrain.stop();
    }
    wait(50, msec);
  }
}

.originX#

.originX returns the x-coordinate of the top-left corner of the detected object’s bounding box in pixels, which is an integer between 0 and 316.

#include "vex.h"

int main() {
  // Initializing Robot Configuration. DO NOT REMOVE!
  vexcodeInit();
  // Display if an object is to the left or the right
  while (true) {
    Brain.Screen.clearScreen();
    Brain.Screen.setCursor(1, 1);
    Vision1.takeSnapshot(Vision1__BLUEBOX);
    if (Vision1.objects[0].exists) {
      if (Vision1.objects[0].originX < 160) {
        Brain.Screen.print("To the left!");
      }
      else {
        Brain.Screen.print("To the right!");
      }
    }
    wait(0.5, seconds);  // Short delay to reduce flicker
  }
}

.originY#

.originY returns the y-coordinate of the top-left corner of the detected object’s bounding box in pixels, which is an integer between 0 and 212.

int main() {
  // Initializing Robot Configuration. DO NOT REMOVE!
  vexcodeInit();
  // Display if an object is close or far
  while (true) {
    Brain.Screen.clearScreen();
    Brain.Screen.setCursor(1, 1);
    Vision1.takeSnapshot(Vision1__BLUEBOX);
    if (Vision1.objects[0].exists) {
      if (Vision1.objects[0].originY < 80) {
        Brain.Screen.print("Far");
      } else {
        Brain.Screen.print("Close");
      }
    }
    wait(0.5, seconds);
  }
}

.angle#

.angle returns the orientation of the detected object in degrees, which is a double between 0 and 360.

int main() {
  // Initializing Robot Configuration. DO NOT REMOVE!
  vexcodeInit();
  // Turn left or right depending on how a configured
  // Color Code is rotated
  while (true) {
    Vision1.takeSnapshot(Vision1__BOXCODE);
    if (Vision1.objects[0].exists) {
      int angle = Vision1.objects[0].angle;
      if (70 < angle && angle < 110) {
        Drivetrain.turnFor(right, 45, degrees);
      }
      else if (250 < angle && angle < 290) {
        Drivetrain.turnFor(left, 45, degrees);
      }
      else {
        Drivetrain.stop();
      }
    }
    wait(0.5, seconds);
  }
}

Constructors#

Constructors are used to manually create vision, signature, and code objects, which are necessary for configuring the sensors outside of VEXcode. If fewer arguments are provided, the constructors’ default arguments or overloads are used.

vision#

vision creates a Vision Sensor.

Usage:
vision (port, brightness, sigs)

Parameters

  • port — The Smart Port that the Vision Sensor is connected to, written as PORTx, where x is the number of the port.

  • brightness — The brightness value for the Vision Sensor, from 0 to 255.

  • sigs — One or more Color Signature or Color Code objects, named using the format VisionSensor__ObjectName (for example, Vision1__BLUEBOX), separated by commas.

// Create the Color Signatures
vision::signature Vision1__REDBOX = vision::signature(1, 10121, 10757, 10439, -1657, -1223, -1440, 2.5, 1);
vision::signature Vision1__BLUEBOX = vision::signature(2, -4479, -3277, -3878, 5869, 7509, 6689, 2.5, 1);

// Create a Color Code
vision::code Vision1__BOXCODE = vision::code(Vision1__REDBOX, Vision1__BLUEBOX);

/*
Create a Vision Sensor with the following values:
port: Port 1
brightness: 50
sigs: REDBOX and BOXCODE
*/
vision Vision1 = vision (PORT1, 50, Vision1__REDBOX, Vision1__BOXCODE);

vision::signature#

vision::signature creates a Color Signature. Up to seven different Color Signatures can be stored on a Vision Sensor at once.

Default Usage:
vision::signature(index, uMin, uMax, uMean, vMin, vMax, vMean, rgb, type)

Parameters

  • index — The signature object’s index, from 1 to 7. Note: Creating two signature objects with the same index number will cause the second created object to override the first.

  • uMin — The value from uMin in the Vision Utility.

  • uMax — The value from uMax in the Vision Utility.

  • uMean — The value from uMean in the Vision Utility.

  • vMin — The value from vMin in the Vision Utility.

  • vMax — The value from vMax in the Vision Utility.

  • vMean — The value from vMean in the Vision Utility.

  • rgb — The value from rgb in the Vision Utility.

  • type — The value from type in the Vision Utility.

To obtain the values needed to create a Color Signature, open the Vision Utility. Once a Color Signature is configured, copy the parameter values from the Configuration window.

/*
Create a Color Signature with the following values:
index: 2
uMin: -4479
uMax: -3277
uMean: -3878
vMin: 5869
vMax: 7509
vMean: 6689
rgb: 2.5
type: 1
*/

vision::signature Vision1__BLUEBOX = vision::signature (2, -4479, -3277, -3878, 5869, 7509, 6689, 2.5, 1);

vision::code#

vision::code creates a Color Code. It requires at least two already defined Color Signatures in order to be used. Up to eight different Color Codes can be stored on a Vision Sensor at once.

Default Usage:
vision::code(sigs)

Parameters

  • sigs — Two or more previously created Color Signature objects, named using the format VisionSensor__colorSignature (for example, Vision1__BLUEBOX), separated by commas. A Color Code can include up to five different Color Signatures.

// Create the Color Signatures
vision::signature Vision1__REDBOX = vision::signature(1, 10121, 10757, 10439, -1657, -1223, -1440, 2.5, 1);
vision::signature Vision1__BLUEBOX = vision::signature(2, -4479, -3277, -3878, 5869, 7509, 6689, 2.5, 1);

// Create a Color Code with the Color Signatures
vision::code Vision1__BOXCODE = vision::code(Vision1__REDBOX, Vision1__BLUEBOX);