Sensing#

Brain Sensing#

Reset Timer#

The Reset Timer block is used to reset the IQ Brain’s timer.

A notched light blue coding block with the text "reset timer" written in white. The block has a simple, rectangular shape with a notched edge on the left side.

The Brain’s timer starts counting when a project begins. The Reset Timer block sets the timer back to 0 seconds.

In this example, the Brain will wait 2 seconds, print the timer’s value, reset the timer, and then print the value again.

A sequence of notched coding blocks forming a program. The sequence starts with a yellow "when started" block, followed by an orange "wait 2 seconds" block. Below that are two purple "print timer in seconds on Brain" blocks, with a blue "reset timer" block in between. Both purple blocks have an additional instruction to "set cursor to next row" after printing the timer on the Brain.
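The timer behavior above can be sketched in plain Python. This is not VEX’s API, just an illustration of a timer that starts at 0 when the program begins and can be reset; the injectable `clock` parameter is an assumption added so the behavior can be demonstrated deterministically.

```python
import time

class BrainTimer:
    """Sketch of the IQ Brain's timer: starts at 0 when the program
    starts and can be reset back to 0 seconds at any point."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._start = self._clock()  # timer begins when the project starts

    def value(self):
        """Report elapsed time in seconds, as a decimal value."""
        return self._clock() - self._start

    def reset(self):
        """Reset the timer back to 0 seconds."""
        self._start = self._clock()
```

With a fake clock, advancing time by 2 seconds makes `value()` report 2.0, and calling `reset()` brings it back to 0.0.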

Timer Value#

The Timer Value block is used to report the value of the IQ Brain’s timer in seconds.

This image shows a rounded block that contains the text "timer in seconds" in lowercase letters.

The timer starts at 0 seconds when the program starts, and reports the timer’s value as a decimal value.

In this example, the Brain will wait 2 seconds, print the timer’s value, reset the timer, and then print the value again.

The image shows a sequence of blocks used to create a program. The sequence starts with a yellow "when started" block, followed by an orange "wait 2 seconds" block. Below that, there are two purple blocks: the first prints the "timer in seconds" on the Brain and sets the cursor to the next row, the next block resets the timer, and the final block prints the "timer in seconds" again on the Brain and sets the cursor to the next row.

Cursor Column#

The Cursor Column block is used to report the column number of the IQ Brain’s screen cursor location.

The image shows a light blue rounded block with the text "cursor column" inside it.

The Cursor Column block will report a value from 1-80 and will start on column 1 at the start of a project.

In this example, the Brain will print the number of the column the cursor is currently on.

The image displays a sequence of blocks. The first block is yellow and labeled "when started." Connected to it is a purple block with the command "print cursor column on Brain," where "cursor column" is shown within a light blue rounded block. The block also includes an option to set the cursor to the next row.

Cursor Row#

The Cursor Row block is used to report the row number of the IQ Brain’s screen cursor location.

The image shows a light blue rounded block with the text "cursor row" inside it.

The Cursor Row block will report a value from 1-9 and will start on row 1 at the start of a project.

In this example, the Brain will print the number of the row the cursor is currently on.

The image shows a sequence of blocks. The first block is yellow and labeled "when started." Attached to it is a purple block with the command "print cursor row on Brain," where "cursor row" is displayed within a light blue rounded block. The block also includes an option to set the cursor to the next row.
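The cursor ranges described above (columns 1–80, rows 1–9, both starting at 1) can be sketched as a small state machine in plain Python. This is an illustration, not VEX’s API; the clamping behavior at the edges of the screen is an assumption made for the example.

```python
class ScreenCursor:
    """Sketch of the IQ Brain screen cursor: columns run 1-80 and rows
    run 1-9, both starting at 1 when a project starts."""

    MAX_COLUMN, MAX_ROW = 80, 9

    def __init__(self):
        self.column, self.row = 1, 1

    def print(self, text):
        # Advance the column by the number of characters printed,
        # clamping at the last column (assumed behavior).
        self.column = min(self.column + len(text), self.MAX_COLUMN)

    def next_row(self):
        # Move down one row and return to column 1,
        # clamping at the last row (assumed behavior).
        self.row = min(self.row + 1, self.MAX_ROW)
        self.column = 1
```

Printing five characters from the start position moves the cursor to column 6, and `next_row()` moves it to row 2, column 1.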

Brain Button Pressed#

The Brain Button Pressed block is used to report if a button on the VEX IQ Brain is pressed.

The image shows a light blue, notched block in a visual coding environment. The block contains the text "Brain Up button pressed?" with a dropdown menu that currently displays "Up."

The Brain Button Pressed block reports True when the selected Brain button is pressed.

The Brain Button Pressed block reports False when the selected Brain button is not pressed.

Choose which button on the IQ Brain to use.

The image shows a light blue, notched block in a visual coding environment with a dropdown menu. The block contains the text "Brain Up button pressed?" with the dropdown menu currently displaying "Up." The dropdown is open, showing three options: "Up," "Down," and "Check," with "Up" being selected. This block is typically used to check if a specific button on the Brain device has been pressed, with the user able to select which button to check from the dropdown menu.

In this example, the Brain will print a message on its screen the first time the Down Brain button is pressed.

The image shows a visual coding block sequence. The top block is a yellow "when started" block, which is used to trigger the following actions when the program begins. Underneath it is a gray comment block with the text: "Don't print the message until the Down Brain button is pressed." Below this comment block, there is an orange block that says "wait until Brain Down button pressed?" The "Down" option is selected from a dropdown menu within the block. Finally, there is a purple block that reads "print Down Brain button pressed! on Brain," followed by a dropdown arrow and a right-facing triangle, indicating the completion of the sequence. This sequence waits for the Brain's Down button to be pressed before printing the specified message.
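The "wait until a button is pressed" pattern above boils down to polling a boolean condition until it reports True. Here is a plain-Python sketch of that pattern; it is not VEX’s API, and the injectable `sleep` parameter is an assumption added so the loop can be demonstrated without real delays.

```python
import time

def wait_until(pressed, poll_s=0.02, sleep=time.sleep):
    """Sketch of the 'wait until <button pressed?>' pattern: repeatedly
    poll a zero-argument callable (e.g. a button's pressed state) and
    return only once it reports True."""
    while not pressed():
        sleep(poll_s)  # yield briefly between polls
```

For example, a condition that becomes True on its third check causes `wait_until` to poll exactly three times before returning.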

Battery Capacity#

The Battery Capacity block is used to report the charge level of the IQ Brain’s battery.

The image shows a light blue, rounded block with the text "battery capacity in %" written in white. This block likely represents a variable or a sensor value in a visual coding environment, displaying or returning the current battery capacity as a percentage.

The Battery Capacity block reports a range from 0% to 100%.

In this example, the Brain will print its current battery charge on the Brain’s screen.

The image shows a block-based coding sequence. The sequence starts with a yellow "when started" block, followed by a purple "print" block. Inside the print block, there's a light blue, rounded block labeled "battery capacity in %." The text is set to be printed on the "Brain" device, and the cursor will move to the next row after printing. This code snippet likely prints the battery capacity percentage on the screen of the "Brain" device when the program starts.

Controller Sensing#

Controller Pressed#

The Controller Pressed block is used to report if a button on the IQ Controller is pressed.

The image shows a hexagonal coding block with the text "Controller E Up pressed?" The phrase "E Up" is within a drop-down menu in the center of the block, indicated by a small downward arrow beside it. The block is filled with a light blue color, and the text is white. The block's hexagonal shape tapers to points on the left and right ends.

The Controller Pressed block reports True when the selected Controller button is pressed.

The Controller Pressed block reports False when the selected Controller button is not pressed.

Choose which Controller button to use.

The image shows a hexagonal coding block labeled "Controller E Up pressed?" with a drop-down menu in the middle. The drop-down is currently expanded, revealing several options: "E Up," "E Down," "F Up," "F Down," and "L Up." The option "E Up" is highlighted and has a checkmark next to it, indicating that it is the currently selected option. The block has a yellow outline, and the drop-down menu is shaded in blue.

In this example, the Brain will print a message on its screen the first time the R Up button on the controller is pressed.

The image shows a sequence of coding blocks arranged vertically, representing a simple program. At the top is a yellow rounded block labeled "when started." Below it is a gray rectangular block with the text "Don't print the message until the R Up button is pressed." Next is an orange hexagonal block that reads "wait until Controller R Up pressed?" with "R Up" displayed in a drop-down menu. The sequence ends with a purple rectangular block labeled "print R Up Button pressed. on Brain and set cursor to next row," where "Brain" is also in a drop-down menu. The program is designed to wait for the R Up button to be pressed before printing the message "R Up Button pressed."

Position of Controller#

The Position of Controller block is used to report the position of a joystick on the IQ Controller along an axis.

The image shows an elongated, rounded coding block labeled "Controller A position." The word "A" is displayed in a drop-down menu in the center of the block, indicated by a small downward arrow next to it. The block is filled with a light blue color, and the text is white. The block's shape is smooth and rounded at the ends.

The Position of Controller block reports a range from -100 to 100.

The Position of Controller block reports 0 when the joystick axis is centered.

Choose the joystick’s axis.

The image shows an elongated, rounded coding block labeled "Controller A position," with a drop-down menu in the center displaying the letter "A." The drop-down is currently expanded, revealing the options "A," "B," "C," and "D." The option "A" is highlighted with a checkmark next to it, indicating that it is the selected option. The block is outlined in yellow, filled with a light blue color, and the text is white. The drop-down menu is also shaded in blue. The shape of the block is smooth and rounded at the ends.

In this example, the Brain will print the C axis of the IQ Controller’s joysticks.

The image shows a sequence of coding blocks arranged horizontally. The first block is a yellow, rounded block labeled "when started." Attached to it is a purple rectangular block labeled "print Controller C position on Brain and set cursor to next row." The phrase "Controller C position" is contained within an elongated, rounded light blue block embedded in the purple block. The word "C" is displayed in a drop-down menu in the center of this light blue block, and the word "Brain" is also in a drop-down menu within the purple block. This sequence represents a program that prints the position of the "Controller C" on the "Brain" when the program starts, and then moves the cursor to the next row.
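A joystick position in the -100 to 100 range, reporting 0 when centered, is commonly mapped straight to a drive velocity. The sketch below illustrates that mapping in plain Python; the small deadband that ignores near-center values is a common practice added for illustration, not part of the block itself.

```python
def joystick_to_velocity(position, deadband=5):
    """Sketch: map a joystick axis position (-100 to 100, 0 when
    centered) to a drive velocity in percent. Positions within the
    deadband are treated as centered (illustrative assumption)."""
    if abs(position) < deadband:
        return 0  # treat a nearly-centered stick as centered
    return max(-100, min(100, position))  # clamp to the valid range
```

A centered stick (or one nudged only slightly) yields 0, while full deflection yields -100 or 100.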

Controller Enable/Disable#

The Controller Enable/Disable block is used to enable or disable the Controller's configured actions from the Devices menu.

The image shows a notched coding block labeled "Controller Disable." The word "Disable" is displayed in a drop-down menu in the center of the block, indicated by a small downward arrow next to it. The block is filled with a light blue color, and the text is white. The notched shape includes slight indentations on the left and right ends.

Choose to either enable or disable the configured Controller actions. By default, the Controller is Enabled in every project.

The image shows a notched coding block labeled "Controller Disable," with a drop-down menu in the center displaying the word "Disable." The drop-down is currently expanded, revealing two options: "Disable" and "Enable." The option "Disable" is highlighted with a checkmark next to it, indicating that it is the selected option. The block is outlined in yellow, filled with a light blue color, and the text is white. The notched shape includes slight indentations on the left and right ends. The drop-down menu is also shaded in blue.

In this example, the Controller will be disabled at the start of the project and re-enabled after the drivetrain has driven forward 6 inches.

The image shows a sequence of blocks in a block-based coding environment. The sequence begins with a yellow block labeled "when started" and is followed by three light blue blocks. The first blue block is labeled "Controller Disable," the second is a darker blue block labeled "drive forward for 6 inches," and the third blue block is labeled "Controller Enable." The blocks are slightly notched, allowing them to connect in a vertical stack, with the "drive" block extending horizontally.

Motor Sensing#

Motor is Done?#

The Motor is Done? block is used to report if the selected IQ Smart Motor or Motor Group has completed its movement.

The image shows a hexagonal light blue block from a block-based coding environment. The block contains the text "Motor7 is done?" with a small downward-facing arrow next to "Motor7," indicating that this can be selected or changed. The shape of the block suggests it is used for a conditional statement.

The Motor is Done? block reports True when the selected Motor or Motor Group has completed its movement.

The Motor is Done? block reports False when the selected Motor or Motor Group is still moving.

Choose which Motor or Motor Group to use.

The image shows a hexagonal light blue block with a yellow outline from a block-based coding environment. The block contains the text "Motor7 is done?" with a small downward-facing arrow next to "Motor7." Below this block is a drop-down menu, also in light blue, with a checkmark next to "Motor7," indicating that this option is currently selected.

Motor is Spinning?#

The Motor is Spinning? block is used to report if the selected IQ Smart Motor or Motor Group is currently moving.

The image shows a hexagonal light blue block from a block-based coding environment. The block contains the text "Motor7 is spinning?" with a small downward-facing arrow next to "Motor7," indicating that this can be selected or changed. The block is used for checking if the motor is currently spinning.

The Motor is Spinning? block reports True when the selected Motor or Motor Group is moving.

The Motor is Spinning? block reports False when the selected Motor or Motor Group is not moving.

Choose which Motor or Motor Group to use.

The image shows a hexagonal light blue block from a block-based coding environment. The block contains the text "Motor7 is spinning?" with a small downward-facing arrow next to "Motor7." Below this block is a drop-down menu, also in light blue, with a checkmark next to "Motor7," indicating that this option is currently selected.

Position of Motor#

The Position of Motor block is used to report the distance an IQ Smart Motor or the first motor of a Motor Group has traveled.

The image shows a rounded light blue block from a block-based coding environment. The block contains the text "Motor7 position in degrees," with two small downward-facing arrows next to "Motor7" and "degrees," indicating that both of these options can be selected or changed. The block is likely used to obtain or set the position of Motor7 in degrees.

Choose which Motor or Motor Group to use.

The image shows a rounded light blue block from a block-based coding environment. The block contains the text "Motor7 position in degrees," with two small downward-facing arrows next to "Motor7" and "degrees." Below this block is a drop-down menu, also in light blue, with a checkmark next to "Motor7," indicating that this option is currently selected.

Choose the units to report in, degrees or turns.

The image shows a rounded light blue block from a block-based coding environment. The block contains the text "Motor7 position in degrees," with two small downward-facing arrows next to "Motor7" and "degrees." Below the block is a drop-down menu with the options "degrees" and "turns," with "degrees" currently selected, indicated by a checkmark.

In this example, the Motor will spin forward for 1 second before its current position is printed on the Brain’s screen.

The image shows a sequence of blocks in a block-based coding environment. It starts with a yellow "when started" block, followed by a gray block that says "Spin Motor7 forward for 1 second." Below it is a blue block that reads "spin Motor7 forward," followed by an orange block that says "wait 1 seconds." Next is a gray block that reads "Print Motor7's current position in degrees." Finally, at the bottom is a purple block that reads "print Motor7 position in degrees on Brain," where "Motor7," "degrees," and "Brain" can be selected or changed through drop-down menus. The blocks are connected in a vertical sequence, indicating the order of execution when the program starts.
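The unit choice for motor position (degrees or turns) is a simple conversion, since one full turn is 360 degrees. A plain-Python sketch, not VEX’s API:

```python
def position_in_units(degrees, units):
    """Sketch of the Position of Motor unit choice: report a motor
    position either in degrees or in turns (1 turn = 360 degrees)."""
    if units == "degrees":
        return degrees
    if units == "turns":
        return degrees / 360.0
    raise ValueError("units must be 'degrees' or 'turns'")
```

For example, a motor that has traveled 720 degrees reports 2.0 turns.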

Velocity of Motor#

The Velocity of Motor block is used to report the current velocity of an IQ Smart Motor or the first motor of a Motor Group.

The image shows a rounded light blue block from a block-based coding environment. The block contains the text "Motor7 velocity in %" with small downward-facing arrows next to "Motor7" and "%" to indicate that these options can be selected or changed. The block is used to specify or retrieve the velocity of Motor7, expressed as a percentage.

The Velocity of Motor block reports a range from -100% to 100% or -600 rpm to 600 rpm.

Choose which Motor or Motor Group to use.

The image shows a rounded light blue block from a block-based coding environment with the text "Motor7 velocity in %" and small downward-facing arrows next to "Motor7" and "%." Below this block is a drop-down menu in light blue, with a checkmark next to "Motor7," indicating that this option is currently selected.

Choose the units to report in, percent (%) or rpm.

The image shows a rounded light blue block from a block-based coding environment with the text "Motor7 velocity in %" and small downward-facing arrows next to "Motor7" and "%." Below the block is a drop-down menu with options for "%" and "rpm," with "%" currently selected, as indicated by a checkmark.

In this example, the Motor will spin forward for 1 second before its current velocity is printed on the Brain’s screen.

The image shows a sequence of blocks in a block-based coding environment. The sequence begins with a yellow "when started" block, followed by a gray block that says "Spin Motor7 forward for 1 second." Below it is a blue block that reads "spin Motor7 forward," followed by an orange block that says "wait 1 seconds." Next, there is a gray block that reads "Print Motor7's current velocity in rpm." Finally, at the bottom is a purple block that reads "print Motor7 velocity in rpm on Brain," where "Motor7," "rpm," and "Brain" can be selected or changed through drop-down menus. The blocks are arranged vertically, indicating the order of execution when the program starts.
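The percent and rpm readings are two scales for the same value. The sketch below assumes a linear mapping in which 100% corresponds to 600 rpm, inferred from the -100% to 100% and -600 rpm to 600 rpm ranges stated above; it is an illustration, not VEX’s API.

```python
def velocity_in_units(percent, units):
    """Sketch of the Velocity of Motor unit choice. Scaling assumes
    100% corresponds to 600 rpm (inferred from the stated ranges)."""
    if units == "%":
        return percent
    if units == "rpm":
        return percent * 6.0  # 100% -> 600 rpm under the assumption above
    raise ValueError("units must be '%' or 'rpm'")
```

Under that assumption, a motor running at 50% velocity reports 300 rpm.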

Current of Motor#

The Current of Motor block is used to report the amount of current an IQ Smart Motor or Motor Group is drawing in amperes (amps).

The image shows a rounded light blue block from a block-based coding environment. The block contains the text "Motor7 current in amps" with small downward-facing arrows next to "Motor7" and "amps," indicating that these options can be selected or changed. The block is used to specify or retrieve the current in amps for Motor7.

Choose which Motor or Motor Group to use.

The image shows a rounded light blue block from a block-based coding environment with the text "Motor7 current in amps" and small downward-facing arrows next to "Motor7" and "amps." Below this block is a drop-down menu in light blue, with a checkmark next to "Motor7," indicating that this option is currently selected.

In this example, the Motor will spin forward for 1 second before its current is printed on the Brain’s screen.

The image shows a sequence of blocks in a block-based coding environment. The sequence begins with a yellow "when started" block, followed by a gray block that says "Spin Motor7 forward for 1 second." Below it is a blue block that reads "spin Motor7 forward," followed by an orange block that says "wait 1 seconds." Next, there is a gray block that reads "Print Motor7's current in amps." Finally, at the bottom is a purple block that reads "print Motor7 current in amps on Brain," where "Motor7," "amps," and "Brain" can be selected or changed through drop-down menus. The blocks are connected in a vertical sequence, indicating the order of execution when the program starts.

Drivetrain Sensing#

Drive is Done?#

The Drive is Done? block is used to report if the Drivetrain has completed its movement.

The image shows a hexagonal light blue block from a block-based coding environment. The block contains the text "drive is done?" indicating that it is used to check whether the drive operation has completed.

The Drive is Done? block reports True when the Drivetrain’s motors have completed their movement.

The Drive is Done? block reports False when the Drivetrain’s motors are still moving.

Drive is Moving?#

The Drive is Moving? block is used to report if the Drivetrain is currently moving.

The image shows a hexagonal light blue block from a block-based coding environment. The block contains the text "drive is moving?" indicating that it is used to check whether the drive operation is currently in motion.

The Drive is Moving? block reports True when the Drivetrain’s motors are moving.

The Drive is Moving? block reports False when the Drivetrain’s motors are not moving.

Drive Heading#

The Drive Heading block is used to report the direction that the Drivetrain is facing by using the Inertial sensor’s current angular position.

The image shows a rounded light blue block from a block-based coding environment. The block contains the text "drive heading in degrees," indicating that it is used to obtain or specify the heading of the drive in degrees.

The Drive Heading block reports a range from 0.0 to 359.99 degrees.

In this example, the Drivetrain will turn to the right for 1 second before its current heading is printed on the Brain’s screen.

The image shows a sequence of blocks in a block-based coding environment. The sequence starts with a yellow "when started" block, followed by a gray block that says "Turn towards the right for 1 second." Below it is a blue block that reads "turn right," followed by an orange block that says "wait 1 seconds." Next, there is a gray block that reads "Print Drivetrain's current heading after 1 second." Finally, at the bottom is a purple block that reads "print drive heading in degrees on Brain and set cursor to next row," where "drive heading in degrees," "Brain," and "and set cursor to next row" can be selected or changed through drop-down menus. The blocks are arranged vertically, indicating the order of execution when the program starts.
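Because the heading always falls in the 0.0 to 359.99 degree range, any accumulated angle is wrapped back into that range. A plain-Python sketch of that wrapping (an illustration, not VEX’s API):

```python
def to_heading(angle):
    """Sketch: wrap any accumulated angle into the 0.0-359.99 degree
    heading range reported by the Drive Heading block."""
    return angle % 360.0
```

For example, turning 370 degrees to the right leaves the Drivetrain at a heading of 10 degrees, and turning 90 degrees to the left from 0 leaves it at 270 degrees.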

Drive Rotation#

The Drive Rotation block is used to report the Drivetrain’s angle of rotation.

The image shows a rounded light blue block from a block-based coding environment. The block contains the text "drive rotation in degrees," indicating that it is used to obtain or specify the rotation of the drive in degrees.

A clockwise direction is reported as a positive value, and a counterclockwise value is reported as a negative value.

In this example, the Drivetrain will turn to the left for 1 second before its current rotation is printed on the Brain’s screen.

The image shows a sequence of blocks in a block-based coding environment. The sequence starts with a yellow "when started" block, followed by a gray block that says "Turn towards the left for 1 second." Below it is a blue block that reads "turn left," followed by an orange block that says "wait 1 seconds." Next, there is a gray block that reads "Print Drivetrain's current rotation after 1 second." Finally, at the bottom is a purple block that reads "print drive rotation in degrees on Brain and set cursor to next row," where "drive rotation in degrees," "Brain," and "and set cursor to next row" can be selected or changed through drop-down menus. The blocks are connected in a vertical sequence, indicating the order of execution when the program starts.
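Unlike the heading, the rotation value is signed and accumulates without wrapping: clockwise turns add to the total and counterclockwise turns subtract from it. A plain-Python sketch of that convention (an illustration, not VEX’s API):

```python
class DriveRotation:
    """Sketch of the Drive Rotation convention: clockwise turns are
    positive, counterclockwise turns are negative, and the total is
    not wrapped, so it can exceed 360 degrees or go below zero."""

    def __init__(self):
        self.rotation = 0.0

    def turn(self, degrees, direction):
        # direction: "right" (clockwise, adds) or "left" (counterclockwise, subtracts)
        self.rotation += degrees if direction == "right" else -degrees
```

Two right turns totaling 360 degrees report a rotation of 360, not 0; a subsequent 450-degree left turn brings the total to -90.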

Drive Velocity#

The Drive Velocity block is used to report the current velocity of the Drivetrain.

The image shows a rounded light blue block from a block-based coding environment. The block contains the text "drive velocity in %" with a small downward-facing arrow next to the "%" symbol, indicating that the unit of measurement can be selected or changed. This block is used to specify or retrieve the velocity of the drive expressed as a percentage.

The Drive Velocity block reports a range from -100% to 100% or -600 rpm to 600 rpm.

Choose the units to report in, percent (%) or rpm.

The image shows a rounded light blue block from a block-based coding environment with the text "drive velocity in %" and a small downward-facing arrow next to the "%." Below the block is a drop-down menu with options for "%" and "rpm," with "%" currently selected, as indicated by a checkmark. The drop-down menu allows the user to choose the unit of measurement for the drive's velocity.

In this example, the Drivetrain will drive forward for 1 second before its current velocity is printed on the Brain’s screen.

The image shows a sequence of blocks in a block-based coding environment. The sequence begins with a yellow "when started" block, followed by a gray block that says "Drive forward for 1 second." Below it is a blue block that reads "drive forward," followed by an orange block that says "wait 1 seconds." Next, there is a gray block that reads "Print Drivetrain's current velocity after 1 second." Finally, at the bottom is a purple block that reads "print drive velocity in % on Brain and set cursor to next row," where "drive velocity in %," "Brain," and "and set cursor to next row" can be selected or changed through drop-down menus. The blocks are connected in a vertical sequence, indicating the order of execution when the program starts.

Drive Current#

The Drive Current block is used to report the amount of electrical current that the Drivetrain's motors are drawing in amperes (amps).

The image shows a rounded light blue block from a block-based coding environment. The block contains the text "drive current amps," indicating that it is used to obtain or specify the current in amps for the drive.

In this example, the Drivetrain will drive forward for 1 second before its current is printed on the Brain’s screen.

The image shows a sequence of blocks in a block-based coding environment. The sequence begins with a yellow "when started" block, followed by a gray block that says "Drive forward for 1 second." Below it is a blue block that reads "drive forward," followed by an orange block that says "wait 1 seconds." Next, there is a gray block that reads "Print Drivetrain's current after 1 second." Finally, at the bottom is a purple block that reads "print drive current amps on Brain and set cursor to next row," where "drive current amps," "Brain," and "and set cursor to next row" can be selected or changed through drop-down menus. The blocks are connected in a vertical sequence, indicating the order of execution when the program starts.

Bumper Sensing#

Bumper Pressed#

The Bumper Pressed block is used to report if the Bumper Switch is pressed.

The image shows a hexagonal light blue block from a block-based coding environment. The block contains the text "Bumper8 pressed?" with a small downward-facing arrow next to "Bumper8," indicating that this option can be selected or changed. The block is used to check whether Bumper8 has been pressed.

The Bumper Pressed block reports True when the selected Bumper Switch is pressed.

The Bumper Pressed block reports False when the selected Bumper Switch is not pressed.

Choose which Bumper Switch to use.

The image shows a hexagonal light blue block from a block-based coding environment with the text "Bumper8 pressed?" and a small downward-facing arrow next to "Bumper8." Below this block is a drop-down menu in light blue, with a checkmark next to "Bumper8," indicating that this option is currently selected. The block is used to check whether Bumper8 has been pressed.

In this example, the Brain will print a message on its screen the first time the Bumper Switch is pressed.

The image shows a sequence of blocks in a block-based coding environment. The sequence begins with a yellow "when started" block, followed by a gray block that says, "Don't print the message until the bumper is pressed." Below it is an orange block that reads "wait until Bumper8 pressed?" where "Bumper8 pressed?" is displayed inside a hexagonal blue block. Finally, at the bottom is a purple block that reads "print Bumper was pressed on Brain," where "Brain" can be selected or changed through a drop-down menu. The blocks are connected in a vertical sequence, indicating the order of execution when the program starts.

Touch LED Sensing#

Touch LED Pressed#

The Touch LED Pressed block is used to report if the Touch LED is pressed.

The image shows a hexagonal light blue block from a block-based coding environment. The block contains the text "TouchLED11 pressed?" with a small downward-facing arrow next to "TouchLED11," indicating that this option can be selected or changed. The block is used to check whether TouchLED11 has been pressed.

The Touch LED Pressed block reports True when the selected Touch LED is pressed.

The Touch LED Pressed block reports False when the selected Touch LED is not pressed.

Choose which Touch LED to use.

The image shows a hexagonal light blue block from a block-based coding environment with the text "TouchLED11 pressed?" and a small downward-facing arrow next to "TouchLED11." Below this block is a drop-down menu in light blue, with a checkmark next to "TouchLED11," indicating that this option is currently selected. The block is used to check whether TouchLED11 has been pressed.

In this example, the Brain will print a message on its screen the first time the Touch LED is pressed.

The image shows a sequence of blocks in a block-based coding environment. The sequence starts with a yellow "when started" block, followed by a gray block that says, "Don't print the message until the Touch LED Sensor is pressed." Below it is an orange block that reads "wait until TouchLED11 pressed?" where "TouchLED11 pressed?" is displayed inside a hexagonal blue block. Finally, at the bottom is a purple block that reads "print Touch LED was pressed on Brain and set cursor to next row," where "Brain" and "and set cursor to next row" can be selected or changed through drop-down menus. The blocks are connected in a vertical sequence, indicating the order of execution when the program starts.

Gyro Sensing#

Calibrate Gyro#

The Calibrate Gyro block is used to calibrate the VEX IQ Gyro Sensor in order to reduce the amount of drift the sensor generates.

The Gyro Sensor must remain still during the calibration process.

The image shows a light blue block from a block-based coding environment. The block contains the text "calibrate Gyro5 for 2 seconds," with small downward-facing arrows next to "Gyro5" and "2," indicating that these options can be selected or changed. The block is used to calibrate a gyroscope sensor (Gyro5) for a specified duration of time, in this case, 2 seconds.

Choose which Gyro Sensor to use.

The image shows a light blue block from a block-based coding environment with the text "calibrate Gyro5 for 2 seconds." Below this block is a drop-down menu in light blue, with a checkmark next to "Gyro5," indicating that this option is currently selected. The block is used to calibrate the Gyro5 sensor for a specified duration, in this case, 2 seconds.

Choose how long to calibrate the Gyro Sensor for:

  • 2 seconds

  • 4 seconds

  • 8 seconds

The image shows a light blue block from a block-based coding environment with the text "calibrate Gyro5 for 2 seconds." Below this block is a drop-down menu in light blue with the options "2," "4," and "8" seconds, with "2" currently selected, as indicated by a checkmark. The block is used to calibrate the Gyro5 sensor for the selected duration.

In this example, the Gyro Sensor will calibrate for 2 seconds before the Brain prints the sensor's current heading.

The image shows a sequence of blocks in a block-based coding environment. The sequence begins with a yellow "when started" block, followed by a light blue block that reads "calibrate Gyro5 for 2 seconds," where "Gyro5" and "2" can be selected or changed through drop-down menus. Below it is a purple block that reads "print Gyro5 heading in degrees on Brain and set cursor to next row," where "Gyro5," "Brain," and "and set cursor to next row" can be selected or changed through drop-down menus. The blocks are connected in a vertical sequence, indicating the order of execution when the program starts.

Set Heading#

The Set Heading block is used to set the Gyro Sensor's current heading to the specified value.

The image shows a light blue block from a block-based coding environment. The block contains the text "set Gyro5 heading to 0 degrees," with a small downward-facing arrow next to "Gyro5" and a circular input field displaying "0," indicating that these options can be selected or changed. The block is used to set the heading of the Gyro5 sensor to a specific degree value, in this case, 0 degrees.

The Set Heading block accepts a range of 0.0 to 359.99 degrees.
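Because Set Heading only accepts 0.0 to 359.99 degrees, a value outside that range needs to be wrapped back into it first. A minimal plain-Python sketch of that wrap (`wrap_heading` is a hypothetical helper for illustration, not a VEX block):

```python
def wrap_heading(degrees):
    """Wrap any angle into the 0.0-359.99 degree range accepted by Set Heading."""
    return degrees % 360.0

print(wrap_heading(450.0))  # 90.0
print(wrap_heading(-90.0))  # 270.0
```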

Choose which Gyro Sensor to use.

The image shows a light blue block from a block-based coding environment with the text "set Gyro5 heading to 0 degrees." Below this block is a drop-down menu in light blue, with a checkmark next to "Gyro5," indicating that this option is currently selected. The block is used to set the heading of the Gyro5 sensor to a specific degree value, in this case, 0 degrees.

In this example, the Brain’s Gyro Sensor will print its starting heading, set its heading to 90 degrees, and then print the new heading.

The image shows a sequence of blocks in a block-based coding environment. The sequence starts with a yellow "when started" block, followed by a purple block that reads "print Gyro5 heading in degrees on Brain and set cursor to next row." Next is a light blue block that reads "set Gyro5 heading to 90 degrees," where "Gyro5" and "90" can be selected or changed through drop-down menus. Finally, there is another purple block that reads "print Gyro5 heading in degrees on Brain and set cursor to next row." The blocks are connected in a vertical sequence, indicating the order of execution when the program starts.

Set Rotation#

The Set Rotation block is used to set the Gyro Sensor's current rotation to the specified value.

The image shows a light blue block from a block-based coding environment. The block contains the text "set Gyro5 rotation to 0 degrees," with a small downward-facing arrow next to "Gyro5" and a circular input field displaying "0," indicating that these options can be selected or changed. The block is used to set the rotation of the Gyro5 sensor to a specific degree value, in this case, 0 degrees.

The Set Rotation block accepts any positive or negative decimal or integer number.

Choose which Gyro Sensor to use.

The image shows a light blue block from a block-based coding environment with the text "set Gyro5 rotation to 0 degrees." Below this block is a drop-down menu in light blue, with a checkmark next to "Gyro5," indicating that this option is currently selected. The block is used to set the rotation of the Gyro5 sensor to a specific degree value, in this case, 0 degrees.

In this example, the Brain’s Gyro Sensor will print its starting rotation, set its rotation to -100 degrees, and then print the new rotation.

The image shows a sequence of blocks in a block-based coding environment. The sequence starts with a yellow "when started" block, followed by a purple block that reads "print Gyro5 rotation in degrees on Brain and set cursor to next row." Next is a light blue block that reads "set Gyro5 rotation to -100 degrees," where "Gyro5" and "-100" can be selected or changed through drop-down menus. Finally, there is another purple block that reads "print Gyro5 rotation in degrees on Brain and set cursor to next row." The blocks are connected in a vertical sequence, indicating the order of execution when the program starts.

Angle of Heading#

The Angle of Heading block is used to report the VEX IQ Gyro Sensor’s current heading in degrees.

The image shows a rounded light blue block from a block-based coding environment. The block contains the text "Gyro5 heading in degrees," with a small downward-facing arrow next to "Gyro5," indicating that this option can be selected or changed. The block is used to obtain or specify the heading of the Gyro5 sensor in degrees.

The Angle of Heading block reports a range from 0.0 to 359.99 degrees.

Choose which Gyro Sensor to use.

The image shows a rounded light blue block from a block-based coding environment with the text "Gyro5 heading in degrees." Below this block is a drop-down menu in light blue, with a checkmark next to "Gyro5," indicating that this option is currently selected. The block is used to obtain or specify the heading of the Gyro5 sensor in degrees.

In this example, the Brain’s Gyro Sensor will print its starting heading, set its heading to 90 degrees, and then print the new heading.

The image shows a sequence of blocks in a block-based coding environment. The sequence starts with a yellow "when started" block, followed by a purple block that reads "print Gyro5 heading in degrees on Brain and set cursor to next row." Next is a light blue block that reads "set Gyro5 heading to 90 degrees," where "Gyro5" and "90" can be selected or changed through drop-down menus. Finally, there is another purple block that reads "print Gyro5 heading in degrees on Brain and set cursor to next row." The blocks are connected in a vertical sequence, indicating the order of execution when the program starts.

Angle of Rotation#

The Angle of Rotation block is used to report the VEX IQ Gyro Sensor’s current rotation in degrees.

The image shows a rounded light blue block from a block-based coding environment. The block contains the text "Gyro5 rotation in degrees," with a small downward-facing arrow next to "Gyro5," indicating that this option can be selected or changed. The block is used to obtain or specify the rotation of the Gyro5 sensor in degrees.

A clockwise direction is reported as a positive value, and a counterclockwise direction is reported as a negative value.
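Unlike heading, rotation accumulates rather than wrapping, so the reading keeps growing past a full turn (or below zero), while a heading-style reading always wraps back into 0 to 359.99. A plain-Python sketch of that distinction (for illustration only, not part of the VEX blocks API):

```python
# Each entry is a turn in degrees: positive = clockwise, negative = counterclockwise.
turns = [90.0, 180.0, -45.0, 360.0]

rotation = 0.0
for turn in turns:
    rotation += turn  # rotation accumulates and can exceed 360 or go negative

print(rotation)          # 585.0 -- total rotation after all turns
print(rotation % 360.0)  # 225.0 -- the equivalent wrapped heading
```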

Choose which Gyro Sensor to use.

The image shows a rounded light blue block from a block-based coding environment with the text "Gyro5 rotation in degrees." Below this block is a drop-down menu in light blue, with a checkmark next to "Gyro5," indicating that this option is currently selected. The block is used to obtain or specify the rotation of the Gyro5 sensor in degrees.

In this example, the Brain’s Gyro Sensor will print its starting rotation, set its rotation to -100 degrees, and then print the new rotation.

The image shows a sequence of blocks in a block-based coding environment. The sequence starts with a yellow "when started" block, followed by a purple block that reads "print Gyro5 rotation in degrees on Brain and set cursor to next row." Next is a light blue block that reads "set Gyro5 rotation to -100 degrees," where "Gyro5" and "-100" can be selected or changed through drop-down menus. Finally, there is another purple block that reads "print Gyro5 rotation in degrees on Brain and set cursor to next row." The blocks are connected in a vertical sequence, indicating the order of execution when the program starts.

Rate of Gyro#

The Rate of Gyro block is used to report the VEX IQ Gyro Sensor’s rate of angular velocity.

The image shows a rounded light blue block from a block-based coding environment. The block contains the text "Gyro5 rate in dps," with a small downward-facing arrow next to "Gyro5," indicating that this option can be selected or changed. The block is used to obtain or specify the rotation rate of the Gyro5 sensor in degrees per second (dps).

The Rate of Gyro block reports a range from 0 to 249.99 dps.

Choose which Gyro Sensor to use.

The image shows a rounded light blue block from a block-based coding environment with the text "Gyro5 rate in dps." Below this block is a drop-down menu in light blue, with a checkmark next to "Gyro5," indicating that this option is currently selected. The block is used to obtain or specify the rotation rate of the Gyro5 sensor in degrees per second (dps).

In this example, the current Gyro Sensor’s rate will be printed to the Brain’s screen.

The image shows a sequence of blocks in a block-based coding environment. The sequence begins with a yellow "when started" block, followed by a purple block that reads "print Gyro5 rate in dps on Brain and set cursor to next row."

Color Sensing#

Color Sensor Found an Object#

The Color Sensor Found an Object block is used to report if the VEX IQ Color Sensor detects an object.

The image shows a hexagonal light blue block from a block-based coding environment. The block contains the text "Color12 found an object?" with a small downward-facing arrow next to "Color12," indicating that this option can be selected or changed. The block is used to check if the selected Color12 sensor has detected an object.

The Color Sensor Found an Object block reports True when the Color Sensor detects an object or surface close to the front of the sensor.

The Color Sensor Found an Object block reports False when the Color Sensor does not detect an object or surface close to the front of the sensor.

Choose which Color Sensor to use.

The image shows a hexagonal light blue block from a block-based coding environment with the text "Color12 found an object?" Below this block is a drop-down menu in light blue, with a checkmark next to "Color12," indicating that this option is currently selected. The block is used to check if the selected Color12 sensor has detected an object.

In this example, when the Color Sensor detects an object, it will print a message to the Brain.

A block-based programming interface showing a sequence of steps: 'when started', 'Don't print the message until the Color Sensor detects an object.', 'wait until Color12 found an object?', and 'print "Color Sensor detected an object." on Brain and set cursor to next row'.

Color Sensor Detects Color#

The Color Sensor Detects Color block is used to report if the VEX IQ Color Sensor detects a specific color.

The image shows a hexagonal light blue block from a block-based coding environment. The block contains the text 'Color12 detects red ?' with small downward-facing arrows next to 'Color12' and 'red,' indicating that these options can be selected or changed. The block is used to check whether Color12 detects the color red.

The Color Sensor Detects Color block reports True when the Color Sensor detects the selected color.

The Color Sensor Detects Color block reports False when the Color Sensor detects a different color than the one selected.

Choose which Color Sensor to use.

The image shows two connected elements from a block-based coding environment. The top element is a hexagonal light blue block containing the text 'Color12 detects red ?' with small downward-facing arrows next to 'Color12' and 'red,' indicating these options can be selected or changed. Below this, connected by a small arrow, is a rectangular light blue block with rounded corners. This lower block contains the text 'Color12' preceded by a checkmark, suggesting it's a selected option from the dropdown menu in the block above. The arrangement demonstrates how options are selected in this coding interface.

Choose which color to detect.

The image shows two connected elements from a block-based coding environment. The bottom element is a hexagonal light blue block containing the text "Color12 detects red ?" with small downward-facing arrows next to "Color12" and "red," indicating these options can be selected or changed. Above this, connected by a small arrow, is a light blue rectangular dropdown menu. The menu lists color options: "purple", "red violet", "violet", "blue violet", and "blue green". This dropdown appears to be expanding from the "red" option in the hexagonal block, showing the available color choices that can be selected for the detection condition.

In this example, when the Color Sensor detects the color green, it will print the detected color to the Brain.

The image shows a sequence of blocks from a block-based coding environment. At the top is a yellow rounded block with "when started" text. Below it, a light gray rectangular block contains the instruction "Don't print the message until the Color Sensor detects the color green." Next, an orange block labeled "wait until" is connected to a light blue hexagonal block containing "Color12 detects green ?" with dropdown arrows. At the bottom, a purple block labeled "print" is followed by light blue elements reading "Color12 color name", then "on", "Brain" with a dropdown arrow, and "and set cursor to next row". This block sequence creates a program that waits for a color sensor to detect green before printing the detected color name.

Color Sensor Color Name#

The Color Sensor Color Name block is used to report the name of the color detected by the VEX IQ Color Sensor.

The image shows a light blue rounded block from a block-based coding environment. The block has a pill-like or capsule shape with fully rounded ends. It contains the text "Color12 color name" with a small downward-facing arrow next to "Color12," indicating that this option can be selected or changed. This block appears to be used to retrieve or display the name of the color detected by the Color12 sensor.

Choose which Color Sensor to use.

The image shows two connected elements from a block-based coding environment. The top element is a light blue pill-shaped block containing the text "Color12 color name" with a small downward-facing arrow next to "Color12," indicating this option can be selected or changed. Below this, connected by a small arrow, is a rectangular light blue block with rounded corners. This lower block contains the text "Color12" preceded by a checkmark, suggesting it's a selected option from the dropdown menu in the block above. The arrangement demonstrates how options are selected in this coding interface for the Color12 sensor's color name function.

In this example, when the Color Sensor detects the color green, it will print the detected color to the Brain.

The image shows a block-based coding sequence. A yellow "when started" block initiates the program. Below, a gray comment block reads "Don't print the message until the Color Sensor detects the color green." An orange "wait until" block connects to a blue hexagonal "Color12 detects green ?" condition. Finally, a purple "print" block includes "Color12 color name" and "Brain" options, ending with "and set cursor to next row." This program waits for Color12 to detect green before printing the color name on the Brain display.

Color Sensor Brightness#

The Color Sensor Brightness block is used to report the amount of light detected by the VEX IQ Color Sensor.

The image shows a light blue pill-shaped block from a block-based coding environment. The block contains the text "Color12 brightness in %" with a small downward-facing arrow next to "Color12," indicating this option can be selected or changed. This block appears to be used to retrieve or display the brightness percentage detected by the Color12 sensor.

Choose which Color Sensor to use.

The image shows two connected elements from a block-based coding environment. The top element is a light blue pill-shaped block containing the text "Color12 brightness in %" with a small downward-facing arrow next to "Color12," indicating this option can be selected or changed. Below this, connected by a small arrow, is a rectangular light blue block with rounded corners. This lower block contains the text "Color12" preceded by a checkmark, suggesting it's a selected option from the dropdown menu in the block above. The arrangement demonstrates how options are selected in this coding interface for the Color12 sensor's brightness function.

In this example, the Color Sensor will print the current brightness to the Brain.

The image shows a sequence of blocks from a block-based coding environment. At the top is a yellow rounded block with "when started" text. Connected below is a long purple block labeled "print" containing several elements: a light blue pill-shaped block with "Color12 brightness in %" and a dropdown arrow, followed by "on", then "Brain" with a dropdown arrow, and finally "and set cursor to next row". This sequence creates a program that, when started, prints the brightness percentage detected by the Color12 sensor on the Brain display and moves to the next line.

Color Sensor Hue#

The Color Sensor Hue block is used to report the hue of the color detected by the VEX IQ Color Sensor.

The Color Sensor Hue block reports a range from 0 to 360.

The image shows a light blue pill-shaped block from a block-based coding environment. The block contains the text "Color12 hue in degrees" with a small downward-facing arrow next to "Color12," indicating this option can be selected or changed. This block appears to be used to retrieve or display the hue value in degrees detected by the Color12 sensor.
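Hue is the angle of a color on a 360-degree color wheel, which is why the block reports 0 to 360: red sits near 0, green near 120, and blue near 240. Python's standard colorsys module computes the same quantity from RGB values (a sketch for illustration only, not part of the VEX API):

```python
import colorsys

def hue_degrees(r, g, b):
    """Convert RGB components (0-255) to a hue angle in degrees (0-360)."""
    hue, _saturation, _value = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return hue * 360.0

print(round(hue_degrees(255, 0, 0)))  # 0   (red)
print(round(hue_degrees(0, 255, 0)))  # 120 (green)
print(round(hue_degrees(0, 0, 255)))  # 240 (blue)
```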

Choose which Color Sensor to use.

The image shows two connected elements from a block-based coding environment. The top element is a light blue pill-shaped block containing the text "Color12 hue in degrees" with a small downward-facing arrow next to "Color12," indicating this option can be selected or changed. Below this, connected by a small arrow, is a rectangular light blue block with rounded corners. This lower block contains the text "Color12" preceded by a checkmark, suggesting it's a selected option from the dropdown menu in the block above. The arrangement demonstrates how options are selected in this coding interface for the Color12 sensor's hue measurement function.

In this example, the Color Sensor will print the current hue to the Brain.

The image shows a sequence of blocks from a block-based coding environment. At the top is a yellow rounded block with "when started" text. Connected below is a long purple block labeled "print" containing several elements: a light blue pill-shaped block with "Color12 hue in degrees" and a dropdown arrow, followed by "on", then "Brain" with a dropdown arrow, and finally "and set cursor to next row". This sequence creates a program that, when started, prints the hue value in degrees detected by the Color12 sensor on the Brain display and moves to the next line.

Distance Sensing#

IQ (1st gen)#

To use the IQ (1st gen) Distance Sensing blocks, you must be using an IQ (1st gen) Distance Sensor.

Distance Sensor Found Object#

The Distance Sensor Found Object block is used to report if the Distance Sensor sees an object within its field of view.

The image shows a hexagonal light blue block from a block-based coding environment. The block contains the text "Distance9 found an object?" with a small downward-facing arrow next to "Distance9," indicating that this option can be selected or changed. This block is likely used to check whether the Distance9 sensor has detected an object, returning a boolean (true/false) value.

The Distance Sensor Found Object block reports True when the Distance Sensor sees an object or surface within its field of view.

The Distance Sensor Found Object block reports False when the Distance Sensor does not detect an object or surface.

Choose which Distance Sensor to use.

The image shows two connected elements from a block-based coding environment. The top element is a hexagonal light blue block containing the text "Distance9 found an object?" with a small downward-facing arrow next to "Distance9," indicating this option can be selected or changed. Below this, connected by a small arrow, is a rectangular light blue block with rounded corners. This lower block contains the text "Distance9" preceded by a checkmark, suggesting it's a selected option from the dropdown menu in the block above. The arrangement demonstrates how options are selected in this coding interface for the Distance9 sensor's object detection function.

In this example, when the Distance Sensor detects an object, it will print the distance between it and the detected object.

The image shows a block-based coding sequence. A yellow "when started" block initiates the program. Below, a gray comment block reads "Don't print the message until the Distance Sensor detects an object." An orange "wait until" block connects to a blue hexagonal "Distance9 found an object?" condition. Finally, a purple "print" block includes "Distance9 distance in mm" and "Brain" options, ending with "and set cursor to next row". This program waits for Distance9 to detect an object before printing the distance measurement on the Brain display.

Object Distance#

The Object Distance block is used to report the distance of the nearest object from the Distance Sensor.

The image shows a light blue pill-shaped block from a block-based coding environment. The block contains three segments: "Distance9" with a small downward-facing arrow, indicating it can be selected or changed; "distance in" as static text; and "mm" also with a downward-facing arrow, suggesting the unit of measurement can be modified. This block appears to be used to retrieve or display the distance measured by the Distance9 sensor in millimeters, with options to change the sensor and the unit of measurement.

The Object Distance block reports a range from 24mm to 1000mm or 1 inch to 40 inches.
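The two unit ranges line up because 1 inch is 25.4 mm: 24 mm is just under an inch, and 1000 mm is just over 39 inches. A quick conversion sketch (plain Python; `mm_to_inches` is a hypothetical helper for illustration, not a VEX block):

```python
def mm_to_inches(mm):
    """Convert a distance reading from millimeters to inches."""
    return mm / 25.4

print(round(mm_to_inches(24), 2))    # 0.94  -- the sensor's minimum range
print(round(mm_to_inches(1000), 2))  # 39.37 -- the sensor's maximum range
```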

Choose which Distance Sensor to use.

The image shows two connected elements from a block-based coding environment. The top element is a light blue pill-shaped block containing the text "Distance9 distance in mm." Below this, connected by a small arrow, is a drop-down menu in light blue, with a checkmark next to "Distance9," indicating that this option is currently selected.

Choose what units to report in: millimeters (mm) or inches.

The image shows two connected elements from a block-based coding environment. The top element is a light blue pill-shaped block containing three segments: "Distance9" with a downward-facing arrow, "distance in" as static text, and "mm" with another downward-facing arrow. Below this, connected by a small arrow to the "mm" segment, is a rectangular light blue block with rounded corners. This lower block displays a dropdown menu with two options: "mm" (millimeters) with a checkmark, indicating it's currently selected, and "inches" below it. This arrangement demonstrates how the unit of measurement for the Distance9 sensor can be selected, offering a choice between millimeters and inches.

In this example, when the Distance Sensor detects an object, it will print the distance between it and the detected object.

The image shows a block-based coding sequence. A yellow "when started" block initiates the program. Below, a gray comment block reads "Don't print the message until the Distance Sensor detects an object." An orange "wait until" block connects to a blue hexagonal "Distance9 found an object?" condition. Finally, a purple "print" block includes "Distance9 distance in mm" and "Brain" options, ending with "and set cursor to next row". This program waits for Distance9 to detect an object before printing the distance measurement in millimeters on the Brain display and moving to the next line.

IQ (2nd gen)#

To use the IQ (2nd gen) Distance Sensing blocks, you must be using an IQ (2nd gen) Distance Sensor.

Object Distance#

The Object Distance block is used to report the distance of the nearest object from the Distance Sensor.

The image shows a light blue pill-shaped block from a block-based coding environment. The block contains three segments: "Distance10" with a small downward-facing arrow, indicating it can be selected or changed; "object distance in" as static text; and "mm" also with a downward-facing arrow, suggesting the unit of measurement can be modified. This block appears to be used to retrieve or display the object distance measured by the Distance10 sensor in millimeters, with options to change the sensor and the unit of measurement.

The Object Distance block reports a range from 20mm to 2000mm.

Choose which Distance Sensor to use.

The image shows two connected elements from a block-based coding environment. The top element is a light blue pill-shaped block containing three segments: "Distance10" with a downward-facing arrow, "object distance in" as static text, and "mm" with another downward-facing arrow. Below this, connected by a small arrow, is a rectangular light blue block with rounded corners. This lower block contains the text "Distance10" preceded by a checkmark, suggesting it's a selected option from the dropdown menu in the block above. The arrangement demonstrates how the Distance10 sensor is selected for measuring object distance, likely in millimeters, in this coding interface.

Choose what units to report in: millimeters (mm) or inches.

The image shows two connected elements from a block-based coding environment. The top element is a light blue pill-shaped block containing three segments: "Distance10" with a downward-facing arrow, "object distance in" as static text, and "mm" with another downward-facing arrow. Below this, connected by a small arrow to the "mm" segment, is a rectangular light blue block with rounded corners. This lower block displays a dropdown menu with two options: "mm" (millimeters) with a checkmark, indicating it's currently selected, and "inches" below it. This arrangement demonstrates how the unit of measurement for the Distance10 sensor's object distance can be selected, offering a choice between millimeters and inches.

In this example, when the Distance Sensor detects an object, it will print the distance between it and the detected object.

The image shows a block-based coding sequence for a robotics or sensor program. It starts with a yellow "when started" block, followed by a gray comment block explaining the program's purpose. An orange "wait until" block uses the Distance10 sensor to detect an object. Once detected, a purple "print" block displays the object's distance in millimeters on a device called "Brain". This sequence creates a simple program that waits for an object to be detected before measuring and displaying its distance.

Object Velocity#

The Object Velocity block is used to report the current velocity of an object in meters per second (m/s).

The image shows a rounded light blue block from a block-based coding environment. The block contains the text "Distance10 object velocity in m/s," with a small downward-facing arrow next to "Distance10," indicating that this option can be selected or changed. The block is used to obtain or specify the velocity of an object detected by the Distance10 sensor, measured in meters per second (m/s).

Choose which Distance Sensor to use.

The image shows a rounded light blue block from a block-based coding environment with the text "Distance10 object velocity in m/s." Below this block is a drop-down menu in light blue, with a checkmark next to "Distance10," indicating that this option is currently selected. The block is used to obtain or specify the velocity of an object detected by the Distance10 sensor, measured in meters per second (m/s).

In this example, the Distance Sensor will report the current velocity of an object moving in front of it.

The image shows a sequence of blocks in a block-based coding environment. The sequence begins with a yellow "when started" block, followed by a purple block that reads "print Distance10 object velocity in m/s on Brain and set cursor to next row." The block is used to print the velocity of an object detected by the Distance10 sensor, measured in meters per second (m/s), on the Brain interface, with the option to move the cursor to the next row after printing. The blocks are connected in a horizontal sequence, indicating that the action will take place when the program starts.

Object Size Is#

The Object Size Is block is used to report if the Distance Sensor detects the specified object size.

The image shows a hexagonal light blue block from a block-based coding environment. The block contains the text "Distance10 object size is small?" with two drop-down menus, one next to "Distance10" and the other next to "small," indicating that these options can be selected or changed. The block is used to check if the object detected by the Distance10 sensor matches the specified size, which in this case is set to "small."

The Distance Sensor determines the size of the object detected (none, small, medium, large) based on the amount of light reflected and returned to the sensor.

The Object Size Is block reports True when the Distance Sensor detects the specified size.

The Object Size Is block reports False when the Distance Sensor doesn’t detect the specified size.
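Because the size reading is based on how much light returns to the sensor, the classification can be pictured as thresholding that reflected amount. The sketch below is conceptual only; the cutoff values and the 0-100 scale are made up for illustration and are not the sensor's actual internal thresholds:

```python
def classify_object_size(reflected_light):
    """Map a reflected-light amount (0-100, hypothetical scale) to a size label."""
    if reflected_light <= 5:
        return "none"    # too little light returned: no object detected
    elif reflected_light <= 30:
        return "small"
    elif reflected_light <= 70:
        return "medium"
    else:
        return "large"

print(classify_object_size(2))   # none
print(classify_object_size(20))  # small
print(classify_object_size(90))  # large
```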

Choose which Distance Sensor to use.

The image shows a hexagonal light blue block from a block-based coding environment with the text "Distance10 object size is small?" Below this block is a drop-down menu in light blue, with a checkmark next to "Distance10," indicating that this option is currently selected. The block is used to check if the object detected by the Distance10 sensor matches the specified size, which in this case is set to "small.

Choose which object size you want the Distance Sensor to check for:

  • small

  • medium

  • large

The image shows a hexagonal light blue block from a block-based coding environment with the text "Distance10 object size is small?" Below this block is a drop-down menu in light blue, showing options for "small," "medium," and "large," with "small" currently selected. The block is used to check if the object detected by the Distance10 sensor matches the specified size, which can be selected from the available options.

In this example, if the Distance Sensor detects a small object, the robot will drive forward until the object appears large.

The image shows a set of coding blocks arranged to instruct a robot or device to perform specific actions based on the size of an object detected by a distance sensor. The sequence begins with the "when started" block, followed by a condition that checks if the distance sensor detects a small object. If a small object is detected, the robot drives forward. It continues to drive forward until the detected object becomes large, at which point the robot stops driving. The condition blocks are hexagonal, with the distance sensor blocks and the actions nested within them.

Distance Sensor Found Object#

The Distance Sensor Found Object block is used to report if the Distance Sensor sees an object within its field of view.

The image shows a hexagonal block labeled "Distance10 found an object?" This block appears to be used in a coding environment to check if a distance sensor, named "Distance10," has detected an object.

The Distance Sensor Found Object block reports True when the Distance Sensor sees an object or surface within its field of view.

The Distance Sensor Found Object block reports False when the Distance Sensor does not detect an object or surface.

Choose which Distance Sensor to use.

The image displays a hexagonal block labeled "Distance10 found an object?" with a dropdown menu open below it showing "Distance10" as a selected option. This block is likely used in a coding environment to check whether the distance sensor, named "Distance10," has detected an object.

In this example, when the Distance Sensor detects an object, it will print the distance between it and the detected object.

The image shows a sequence of blocks that instruct a program to wait until the distance sensor, labeled "Distance10," detects an object before printing the object's distance in millimeters on the Brain's display. The first block sets the condition "when started," and the next blocks contain the conditional statement and the print action. The hexagonal block checks if "Distance10" has found an object.

Optical Sensing#

Set Optical Light#

The Set Optical Light block is used to turn the light on the Optical Sensor on or off. The light allows the Optical Sensor to detect objects in dark areas.

The image shows a block that is used in a coding environment to set the light on an Optical Sensor. The block is rectangular with a slight notch on the left side, indicating that it can be connected to other blocks in a sequence. The block's text reads, "set Optical7 light on," where "Optical7" is a dropdown menu that allows selecting different optical sensors, and "on" is also a dropdown menu that lets you choose between turning the light "on" or "off."

Choose which Optical Sensor to use.

The image shows a coding block used to control the light of an Optical Sensor. The block is rectangular with a slight notch on the left side, allowing it to connect with other blocks. The block's text reads "set Optical7 light on," where "Optical7" is selected from a dropdown menu that lists available optical sensors, and "on" is another dropdown menu option that allows the light to be turned "on" or "off." The dropdown menu for "Optical7" is currently open, showing the selected sensor.

Choose whether to turn the light on or off.

The image displays a rectangular coding block with a slight notch on the left, which is used to control the light of an Optical Sensor. The text within the block reads "set Optical7 light on." "Optical7" is selected from a dropdown menu, and the word "on" is also chosen from another dropdown menu that controls the state of the light. In the image, the dropdown menu for the light state is open, showing "on" as the selected option with "off" as an alternative choice.

In this example, the Optical Sensor will turn its light on for two seconds before turning it off.

The image displays a simple program made of three blocks stacked vertically. The program begins with a yellow hexagonal block labeled "when started." Directly underneath, there is a blue rectangular block with slight notches on the left, containing the command "set Optical7 light on." Following this, there is a yellow rounded block with a slight notch on the left that says "wait 2 seconds." Finally, the program ends with another blue rectangular block, similar to the first, with the command "set Optical7 light off." This sequence sets the light of the Optical7 sensor to turn on, waits for two seconds, and then turns the light off.

Set Optical Light Power#

The Set Optical Light Power block is used to set the light power of the Optical Sensor.

The image displays a blue rectangular block with slight notches on the left side. The block contains the command "set Optical7 light power to 50%." This block is used to adjust the light power of the Optical7 sensor to 50 percent.

The Set Optical Light Power block accepts a range of 0% to 100%. This changes the brightness of the light on the Optical Sensor. If the light is off, this block will turn the light on.
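Because the block only accepts 0% to 100%, a sketch of the same contract clamps any out-of-range request. The function name here is invented for illustration; this is not the VEX API.

```python
# Illustrative sketch: the block accepts 0% to 100%, so this clamps any
# out-of-range request to that contract.
def clamp_light_power(percent):
    """Clamp a requested light power to the 0-100% range the block accepts."""
    return max(0, min(100, percent))


print(clamp_light_power(150))  # 100
print(clamp_light_power(75))   # 75
```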

Choose which Optical Sensor to use.

The image shows a blue rectangular block with slight notches on the left side. The block contains the command "set Optical7 light power to 50%," with a dropdown menu open for selecting the device (Optical7 in this case). This block is used to adjust the light power of the Optical7 sensor to 50 percent.

In this example, the Optical Sensor’s light power is set to 75% before the program waits for the sensor to detect an object and prints a message.

The image shows a sequence of blocks designed to control a robot's sensor and light settings. The yellow "when started" block initiates the sequence. The first block sets the eye light power to 75%. The program then waits until the Eye Sensor detects an object, indicated by the hexagonal block that checks if the "eye found an object." Once an object is detected, the program prints "Object detected." The last purple block ensures the message is printed on the Brain display and moves the cursor to the next row.

Optical Sensor Found Object#

The Optical Sensor Found Object block is used to report if the Optical Sensor detects an object close to it.

The image shows a hexagonal block with the text "eye found an object?" This block is used in a program to check if the Eye Sensor on a robot has detected an object.

The Optical Sensor Found Object block reports True when the Optical Sensor detects an object close to it.

The Optical Sensor Found Object block reports False when an object is not within range of the Optical Sensor.

Choose which Optical Sensor to use.

The image shows a hexagonal block with a dropdown menu. The block reads "Optical7 found an object?" with the "Optical7" dropdown currently selected. This block is used in programming to check if the Optical Sensor labeled "Optical7" has detected an object.

In this example, the Optical Sensor’s light power is set to 75% before the program waits for the sensor to detect an object and prints a message.

The image shows a sequence of blocks used in a programming environment. The program starts with a "when started" block, followed by a block that sets the "eye" light power to 75%. The program then waits until the Eye Sensor detects an object, then prints the message "Object detected" on the Brain screen, setting the cursor to the next row afterward.

Optical Sensor Detects Color#

The Optical Sensor Detects Color block is used to report if the Optical Sensor detects the specified color.

The image shows a hexagonal block labeled "eye detects red?" This block is used in a programming environment to check if the Eye Sensor detects the color red. The block has a dropdown menu to select different colors for detection.

The Optical Sensor Detects Color block reports True when the Optical Sensor detects the specified color.

The Optical Sensor Detects Color block reports False when the Optical Sensor doesn’t detect the specified color.

Choose which Optical Sensor to use.

The image shows a hexagonal block labeled "Optical7 detects red?" with a dropdown menu. The dropdown menu is currently open, showing "Optical7" as the selected sensor. This block is used to check if the specified Optical Sensor (Optical7) detects the color red, with the option to change the color and the sensor via dropdown menus.

Choose which color the Optical Sensor will check for.

The image shows a hexagonal block labeled "eye detects red?" with a dropdown menu. The dropdown menu is currently open, displaying color options "red," "green," and "blue," with "red" selected. This block is used to check if the Eye Sensor detects a specific color, with the ability to change the detected color via the dropdown menu.

In this example, the Optical Sensor will wait until it detects a red color before printing the color on the Brain’s screen.

The image shows a sequence of blocks used in a program. The sequence starts with a yellow "when started" block. Underneath, a gray block reads, "Wait until the Eye Sensor detects the color red." This is followed by an orange hexagonal block with the text "wait until eye detects red?" which checks if the Eye Sensor detects the color red. Finally, a purple block prints the message "Color red detected." and sets the cursor to the next row. This program waits until the Eye Sensor detects the color red and then prints a confirmation message.

Optical Sensor Color Name#

The Optical Sensor Color Name block is used to report the name of the color detected by the VEX IQ Optical Sensor.

The image shows a rounded blue block labeled "Optical4 color name," with a dropdown menu next to "Optical4" that allows the user to select the sensor device. The block is used to retrieve the name of the color detected by the Optical Sensor.

The Color Name block reports one of the following colors:

  • red

  • green

  • blue

  • yellow

  • orange

  • purple

  • cyan
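
One way to picture how a sensor reduces a continuous hue to these seven names is a lookup over hue ranges on the color wheel. The boundary values below are assumptions made for this sketch, not VEX's actual thresholds.

```python
# Illustrative sketch: mapping a hue on the color wheel (0-359 degrees)
# to the seven names the block can report. The boundaries are assumed.
def color_name(hue):
    """Return a color name for a hue in degrees (0-359)."""
    if hue < 15 or hue >= 345:
        return "red"
    if hue < 45:
        return "orange"
    if hue < 75:
        return "yellow"
    if hue < 165:
        return "green"
    if hue < 195:
        return "cyan"
    if hue < 270:
        return "blue"
    return "purple"


print(color_name(0))    # red
print(color_name(120))  # green
print(color_name(200))  # blue
```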

Choose which Optical Sensor to use.

The image shows a rounded blue block labeled "Optical4 color name," with a dropdown menu that is currently set to "Optical4." The dropdown menu is highlighted, indicating that it is currently selected and allows the user to choose the specific sensor device for retrieving the color name detected by the Optical Sensor.

In this example, the Optical Sensor will wait until it detects a red color before printing the color on the Brain’s screen.

The image shows a set of blocks forming a program in a visual coding environment. The program starts with the "when started" block, which initiates the sequence. The first block after the start command is a grey block that instructs the program to wait until the Eye Sensor detects the color red. Following this, there is a yellow hexagonal block labeled "wait until eye detects red?" indicating a conditional statement that pauses the program until the Eye Sensor detects the color red. Finally, a purple block labeled "print Color red detected. on Brain and set cursor to next row" is used to print the message "Color red detected." on the device's Brain when the condition is met.

Optical Brightness#

The Optical Brightness block is used to report the amount of light detected by the Optical Sensor.

The image shows a rounded blue block labeled "eye brightness in %." This block appears to be used in a visual coding environment to get or set the brightness level of a sensor or device referred to as "eye," with the brightness value expressed as a percentage.

The Optical Brightness block reports a number value from 0% to 100%.

A large amount of light detected will report a high brightness value.

A small amount of light detected will report a low brightness value.

Choose which Optical Sensor to use.

The image shows a rounded blue block labeled "Optical7 brightness in %." This block appears to be part of a visual coding interface, where "Optical7" is selected from a dropdown menu, and the block is likely used to get or set the brightness level of an optical sensor named "Optical7," with the brightness value expressed as a percentage.

In this example, the Optical Sensor will print the current brightness value to the Brain’s screen.

The image shows a sequence of blocks from a visual coding interface. The sequence starts with a yellow "when started" block, indicating that the following actions will occur when the program begins. Attached to this is a purple "print" block with the text "eye brightness in %" inside a blue rounded block. This likely means that when the program starts, it will print the brightness percentage of the "eye" sensor and set the cursor to the next row on the display.

Optical Hue#

The Optical Hue block is used to report the hue of the color of an object.

The image shows a blue rounded block with the text "eye hue in degrees" written inside. This block is likely used in a visual coding interface to represent or measure the hue of a color detected by an "eye" sensor, expressed in degrees.

The Optical Hue block reports the hue of the color of an object as a number between 0 and 359.

The value can be thought of as the location of the color on a color wheel in degrees.
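The "location on a color wheel" idea can be demonstrated with Python's standard `colorsys` module, which converts an RGB color to a hue in the same 0 to 359 degree range the block reports. The function name is invented for this sketch.

```python
import colorsys

# Illustrative sketch: computing a hue in degrees (0-359) from an RGB
# color, the same "position on the color wheel" the block reports.
def hue_degrees(r, g, b):
    """Return the hue of an RGB color (components 0-255) in degrees."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return round(h * 360) % 360


print(hue_degrees(255, 0, 0))  # 0   (red)
print(hue_degrees(0, 255, 0))  # 120 (green)
print(hue_degrees(0, 0, 255))  # 240 (blue)
```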

Choose which Optical Sensor to use.

The image shows a blue rounded block with the text "Optical7 hue in degrees" and a dropdown menu that allows you to select "Optical7" as the sensor. This block is used in a visual coding interface to measure or display the hue detected by the "Optical7" sensor in degrees. The dropdown menu is expanded, showing the "Optical7" option selected.

In this example, the Optical Sensor will print the currently seen hue to the Brain’s screen.

The image shows a simple visual coding block sequence. The sequence starts with a yellow block labeled "when started," followed by a purple block that contains the action to "print eye hue in degrees and set cursor to next row." This sequence indicates that when the program begins, it will print the hue detected by the eye sensor in degrees and move the cursor to the next row for any subsequent output.

Vision Sensing#

Take Vision Sensor Snapshot#

The Take Vision Sensor Snapshot block is used to take a snapshot from the Vision Sensor.

The image shows a block from a visual coding environment. The block is rectangular with slightly rounded edges and is used to take a snapshot using a vision sensor labeled as "Vision1." The block is set to take a snapshot of a selected signature, which is indicated by the placeholder text "SELECT_A_SIG." This block is likely used to capture visual data for further processing in a program.

The Take Vision Sensor Snapshot block will capture the current image from the Vision Sensor to be processed and analyzed for color signatures and codes.

A snapshot is required first before using any other Vision Sensor blocks.
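The "snapshot first" rule can be sketched as a small class that refuses to report objects until a snapshot has been taken. The class and method names here are invented for illustration; this is not the VEX API.

```python
# Illustrative sketch: the "snapshot first" rule as a small class that
# refuses to report objects until take_snapshot has been called.
class VisionSketch:
    def __init__(self):
        self._objects = None  # no snapshot taken yet

    def take_snapshot(self, detected_objects):
        """Capture the list of objects matching the chosen signature."""
        self._objects = list(detected_objects)

    def object_count(self):
        """Report how many objects the last snapshot captured."""
        if self._objects is None:
            raise RuntimeError("take a snapshot before using other blocks")
        return len(self._objects)


vision = VisionSketch()
vision.take_snapshot(["REDBLOCK", "REDBLOCK"])
print(vision.object_count())  # 2
```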

Choose which Vision Sensor to use.

The image shows a block from a visual coding environment. The block is rectangular with slightly rounded edges and is used to take a snapshot using a vision sensor labeled as "Vision1." Below the block, there is a dropdown menu that allows the user to select "Vision1" as the active sensor for taking the snapshot. The block is also set to take a snapshot of a selected signature, which is indicated by the placeholder text "SELECT_A_SIG." This block is typically used in a program to capture visual data from a specific sensor.

Select which vision signature to use. Vision signatures are configured from the Devices window.

The image shows a block from a visual coding environment with a dropdown menu expanded. The block is rectangular with slightly rounded edges and is used to take a snapshot using a vision sensor labeled as "Vision1." The dropdown menu below "snapshot of SELECT_A_SIG" is expanded, revealing two options: "REDBLOCK" and "GREENBLOCK." This block allows the user to select a specific signature (such as "REDBLOCK" or "GREENBLOCK") for the vision sensor to capture in the snapshot.

Set Vision Sensor Object Item#

The Set Vision Sensor Object Item block is used to select which of the detected objects (the object item) subsequent Vision Sensor blocks will report information about.

The image shows a rectangular block with slightly rounded edges from a visual coding environment. The block is used to set a specific object item for a vision sensor labeled as "Vision1." The block has a dropdown menu next to "Vision1" and an option set to "1" next to "object item to." This block allows the user to define which object item the vision sensor should track or reference during the program's execution.

Choose which Vision Sensor to use.

The image displays a block from a visual coding environment, with slightly rounded edges, used to set a specific object item for a vision sensor labeled "Vision1." The dropdown menu is expanded, showing that "Vision1" is selected. The block is configured to set the object item to "1." This block allows the user to specify which object item the vision sensor should track or reference during the program's execution.

Vision Sensor Object Count#

The Vision Sensor Object Count block is used to report how many objects the Vision Sensor detects.

The image shows a block from a visual coding environment, which is slightly rounded in shape. The block is configured for a vision sensor labeled "Vision1" and is set to retrieve the "object count." This block is used to get the number of objects detected by the vision sensor during the program's execution.

The Take Vision Sensor Snapshot block will need to be used before the Vision Sensor Object Count block reports a number of objects.

The Vision Sensor Object Count block will only detect the number of objects from the last snapshot signature.
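Because the count only reflects the signature of the last snapshot, a sketch of the behavior filters detections by that signature. The signature names mirror the dropdown shown earlier; the function and data are invented for this sketch.

```python
# Illustrative sketch: the count reflects only objects matching the
# signature of the most recent snapshot.
detections = ["REDBLOCK", "GREENBLOCK", "REDBLOCK"]  # hypothetical detections


def snapshot_count(detections, signature):
    """Count only the detected objects that match the snapshot's signature."""
    return sum(1 for d in detections if d == signature)


print(snapshot_count(detections, "REDBLOCK"))    # 2
print(snapshot_count(detections, "GREENBLOCK"))  # 1
```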

Choose which Vision Sensor to use.

The image shows a rounded block from a visual coding environment. This block is set up to use a vision sensor labeled "Vision1" to retrieve the "object count." The dropdown menu is open, showing the selection of the sensor. The block is used to count the number of objects detected by the specified vision sensor during the program's execution.

Vision Sensor Object Exists?#

The Vision Sensor Object Exists? block is used to report if the Vision Sensor detects a configured object.

The image shows a hexagonal block from a visual coding environment. This block is set up to check if an object exists as detected by a vision sensor labeled "Vision1." The block likely returns a boolean value, indicating whether an object is present in the sensor's view during the program's execution.

The Take Vision Sensor Snapshot block will need to be used before the Vision Sensor Object Exists? block can detect any configured objects.

The Vision Sensor Object Exists? block reports True when the Vision Sensor detects a configured object.

The Vision Sensor Object Exists? block reports False when the Vision Sensor does not detect a configured object.

Choose which Vision Sensor to use.

The image shows a hexagonal block in a visual coding environment. The block is configured to check if an object exists, as detected by a vision sensor labeled "Vision1." The dropdown menu is open, confirming "Vision1" is selected. This block likely returns a boolean value indicating whether the vision sensor has identified an object within its field of view.

Vision Sensor Object#

The Vision Sensor Object block is used to report information about a detected object from the Vision Sensor.

The image displays a rounded block in a visual coding environment. The block is configured to retrieve the width of an object detected by a vision sensor labeled "Vision1." The dropdown menu on the block allows selection of specific object properties, and in this case, "width" is selected. This block likely returns a numeric value representing the width of the detected object as measured by the sensor.

The Take Vision Sensor Snapshot block will need to be used before the Vision Sensor Object block can report information about detected objects.

Choose which Vision Sensor to use.

The image shows a rounded block in a visual coding environment. The block is configured to retrieve the width of an object detected by a vision sensor labeled "Vision1." A dropdown menu is expanded below the block, showing that "Vision1" is selected from the available sensors. This block will return a numeric value representing the width of the detected object as measured by the "Vision1" sensor.

Choose which property to report from the Vision Sensor:

  • width - How wide the object is in pixels, from 2 - 316 pixels.

  • height - How tall the object is in pixels, from 2 - 212 pixels.

  • centerX - The center X coordinate of the detected object, from 0 - 315 pixels.

  • centerY - The center Y coordinate of the detected object, from 0 - 211 pixels.

  • angle - The angle of the detected object, from 0 - 180 degrees.

The image displays a rounded block in a visual coding environment, configured to retrieve a specific property of an object detected by the "Vision1" sensor. The block is set to "width" by default, and a dropdown menu is expanded, revealing additional options including "height," "centerX," "centerY," and "angle." These options allow the user to select different attributes of the detected object for the block to return, depending on the needs of the code.
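
The five properties and their documented ranges can be sketched as a small lookup. The range check is the sketch's own; the ranges themselves come from the list above. This is not the VEX API.

```python
# Illustrative sketch: a detected object with the five reportable
# properties and the documented range for each.
RANGES = {
    "width": (2, 316),     # pixels
    "height": (2, 212),    # pixels
    "centerX": (0, 315),   # pixels
    "centerY": (0, 211),   # pixels
    "angle": (0, 180),     # degrees
}


def object_property(obj, name):
    """Report one property of a detected object, checking its documented range."""
    value = obj[name]
    low, high = RANGES[name]
    if not low <= value <= high:
        raise ValueError(f"{name}={value} is outside {low}-{high}")
    return value


obj = {"width": 50, "height": 40, "centerX": 158, "centerY": 106, "angle": 90}
print(object_property(obj, "centerX"))  # 158
```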