AI Vision#
The AI Vision Sensor must be connected to your V5 Brain and configured in VEXcode V5 before it can be used. For setup information, see Getting Started with the AI Vision Sensor with VEX V5.
Consult these articles for more information about using the AI Vision Sensor.
For more detailed information about using the AI Vision Sensor in VEXcode V5, read Coding with the AI Vision Sensor in VEXcode V5 C++.
Initializing the aivision Class#
An AI Vision Sensor is created using one of the following constructors:
The aivision(int32_t port) constructor creates an aivision object in the specified port.
Parameter | Description
---|---
port | A valid Smart Port that the AI Vision Sensor is connected to.
// Create a new AI Vision Sensor "aiVision" with the aivision class.
aivision aiVision = aivision(PORT1);
The aivision(port, desc, …) constructor uses two or more parameters:
Parameter | Description
---|---
port | A valid Smart Port that the AI Vision Sensor is connected to.
desc | The name of one or more colordesc, codedesc, tagdesc, or aiobjdesc objects.
// Create a new Color Signature "aiVision__Red" with the colordesc class.
aivision::colordesc aiVision__Red = aivision::colordesc(1, 207, 19, 25, 10.00, 0.20);
// Create a new AI Vision Sensor "aiVision" with the aivision
// class, with the "aiVision__Red" colordesc.
aivision aiVision = aivision(PORT1, aiVision__Red);
This aiVision object and aiVision__Red colordesc object will be used in all subsequent examples throughout this API documentation when referring to aivision class methods.
Related Classes
Class Methods#
takeSnapshot()#
The takeSnapshot method takes a picture of what the AI Vision Sensor is currently seeing and pulls data from that snapshot that can then be used in a project.
Taking a snapshot stores all of the detected objects that you specified in the AI Vision Sensor instance. For example, if you want to detect a "Blue" Color Signature and the AI Vision Sensor detects three different blue objects, data for all three will be included in the array.
The takeSnapshot(desc, count) method takes the current snapshot visible to the AI Vision Sensor and detects the objects of a specified object description.
Parameters | Description
---|---
desc | The colordesc, codedesc, tagdesc, or aiobjdesc object to detect.
count | Optional. The maximum number of objects to obtain. The default is 8.
Returns: An integer representing the number of objects found that match the description passed as a parameter.
while (true) {
  // Take a snapshot of the red objects detected by
  // the AI Vision Sensor.
  aiVision.takeSnapshot(aiVision__Red);
  // Clear the screen and reset the cursor so that we can
  // display new information.
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1, 1);
  // Print the number of detected objects to the Brain's screen.
  Brain.Screen.print("Object Count: %d", aiVision.objectCount);
  // Wait 0.5 seconds before repeating the loop and
  // taking a new snapshot.
  wait(0.5, seconds);
}
objects#
The objects method allows you to access stored properties of objects from the last taken snapshot.
Available properties:
id
centerX and centerY
originX and originY
width and height
angle
exists
score
To access an object’s property, use the name of the AI Vision Sensor, followed by the objects method, and then the object’s index. For example: aiVision.objects[0].width
id#
The id property is only available for AprilTags and AI Classifications.
For an AprilTag, the id property represents the detected AprilTag's ID number.
For AI Classifications, the id property represents the specific type of AI Classification detected. For more information on what IDs AI Classifications have, go to this article.
To call the id property, a snapshot must be taken using the aiVision.takeSnapshot command. The array is sorted by object area in pixels, from largest to smallest, with indices starting at 0.
Note: AprilTags are sorted by their unique IDs in ascending order, not by size. For example, if AprilTags 1, 15, and 3 are detected:
AprilTag 1 is at index 0.
AprilTag 3 is at index 1.
AprilTag 15 is at index 2.
To call this property, use the objects method followed by the index of the detected object to pull the property from. For example: aiVision.objects[0].id.
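As an illustrative sketch (assuming the aiVision object from the examples above and that AprilTag detection is enabled), the IDs of all detected AprilTags could be printed like this:

```cpp
while (true) {
  // Get a snapshot of all AprilTags.
  aiVision.takeSnapshot(aivision::ALL_TAGS);
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1, 1);
  // Print the ID of every AprilTag found in the snapshot.
  for (int i = 0; i < aiVision.objectCount; i++) {
    Brain.Screen.print("Tag ID: %d", aiVision.objects[i].id);
    Brain.Screen.newLine();
  }
  wait(5, msec);
}
```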
centerX and centerY#
The centerX and centerY properties report the center coordinates of the detected object in pixels.
To call the centerX or centerY property, a snapshot must be taken using the aiVision.takeSnapshot command. The array is sorted by object area in pixels, from largest to smallest, with indices starting at 0.
To call a property, use the objects method followed by the index of the detected object to pull the property from. For example: aiVision.objects[0].centerX.
In this example, because the center of the AI Vision Sensor's view is (160, 120), the robot will turn right until a detected object's centerX coordinate is greater than 150 pixels but less than 170 pixels.
while (true) {
  // Get a snapshot of all Blue Color objects.
  aiVision.takeSnapshot(aiVision__Blue);
  // Check to make sure an object was detected in the snapshot before pulling data.
  if (aiVision.objectCount > 0) {
    // Turn right until the object's centerX is between 150 and 170 pixels.
    if (aiVision.objects[0].centerX < 150.0 || aiVision.objects[0].centerX > 170.0) {
      Drivetrain.turn(right);
    } else {
      Drivetrain.stop();
    }
  }
  wait(5, msec);
}
originX and originY#
The originX and originY properties report the coordinates, in pixels, of the top-left corner of the object's bounding box.
To call the originX or originY property, a snapshot must be taken using the aiVision.takeSnapshot command. The array is sorted by object area in pixels, from largest to smallest, with indices starting at 0.
To call a property, use the objects method followed by the index of the detected object to pull the property from. For example: aiVision.objects[0].originX.
In this example, a rectangle will be drawn on the Brain's screen with the exact dimensions of the specified object's bounding box.
while (true) {
  // Get a snapshot of all Blue objects.
  aiVision.takeSnapshot(aiVision__Blue);
  Brain.Screen.clearScreen();
  // Check to make sure an object was detected in the snapshot before pulling data.
  if (aiVision.objectCount > 0) {
    // Draw the detected object's bounding box on the Brain's screen.
    Brain.Screen.drawRectangle(aiVision.objects[0].originX, aiVision.objects[0].originY,
                               aiVision.objects[0].width, aiVision.objects[0].height);
  }
  wait(5, msec);
}
width and height#
The width and height properties report the width or height of the object in pixels.
To call the width or height property, a snapshot must be taken using the aiVision.takeSnapshot command. The array is sorted by object area in pixels, from largest to smallest, with indices starting at 0.
To call a property, use the objects method followed by the index of the detected object to pull the property from. For example: aiVision.objects[0].width.
In this example, the object's width is used for navigation. The robot will approach the object until it reaches a specific size before stopping.
while (true) {
  // Get a snapshot of all Blue objects.
  aiVision.takeSnapshot(aiVision__Blue);
  // Check to make sure an object was detected in the snapshot before pulling data.
  if (aiVision.objectCount > 0) {
    // Drive forward until the object appears at least 250 pixels wide.
    if (aiVision.objects[0].width < 250.0) {
      Drivetrain.drive(forward);
    } else {
      Drivetrain.stop();
    }
  }
  wait(5, msec);
}
angle#
The angle property is only available for Color Codes and AprilTags.
This property reports the angle of the detected Color Code or AprilTag.
To call the angle property, a snapshot must be taken using the aiVision.takeSnapshot command. The array is sorted by object area in pixels, from largest to smallest, with indices starting at 0.
To call this property, use the objects method followed by the index of the detected object to pull the property from. For example: aiVision.objects[0].angle.
In this example, the AprilTag's angle is printed on the Brain's screen.
while (true) {
  // Get a snapshot of all AprilTags.
  aiVision.takeSnapshot(aivision::ALL_TAGS);
  Brain.Screen.clearScreen();
  // Check to make sure an object was detected in the
  // snapshot before pulling data.
  if (aiVision.objects[0].exists == true) {
    Brain.Screen.print(aiVision.objects[0].angle);
  }
  wait(5, msec);
}
exists#
This property returns a Boolean value indicating whether the specified object exists.
To call the exists property, a snapshot must be taken using the aiVision.takeSnapshot command. The array is sorted by object area in pixels, from largest to smallest, with indices starting at 0.
To call this property, use the objects method followed by the index of the detected object to pull the property from. For example: aiVision.objects[0].exists.
In this example, the robot checks whether an AprilTag is detected before printing its angle on the Brain's screen.
while (true) {
  // Get a snapshot of all AprilTags.
  aiVision.takeSnapshot(aivision::ALL_TAGS);
  Brain.Screen.clearScreen();
  // Check to make sure an object was detected in the
  // snapshot before pulling data.
  if (aiVision.objects[0].exists == true) {
    Brain.Screen.print(aiVision.objects[0].angle);
  }
  wait(5, msec);
}
score#
The score property is only available for AI Classifications.
This property returns the confidence score of the specified AI Classification. The score ranges from 0% to 100%, indicating how certain the AI Vision Sensor is of its detection accuracy.
To call the score property, a snapshot must be taken using the aiVision.takeSnapshot command. The array is sorted by object area in pixels, from largest to smallest, with indices starting at 0.
To call this property, use the objects method followed by the index of the detected object to pull the property from. For example: aiVision.objects[0].score.
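As a hedged sketch (assuming an AI model is configured on the sensor, and that aivision::ALL_AIOBJS is the selector for all AI Classifications), the confidence score of the largest detected classification could be displayed like this:

```cpp
while (true) {
  // Get a snapshot of all AI Classifications.
  aiVision.takeSnapshot(aivision::ALL_AIOBJS);
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1, 1);
  // Check to make sure an object was detected before pulling data.
  if (aiVision.objectCount > 0) {
    // Print the confidence score of the largest detected object.
    Brain.Screen.print("Score: %d", aiVision.objects[0].score);
  }
  wait(5, msec);
}
```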
objectCount#
The objectCount property contains the number of objects found in the most recent snapshot.
Returns: An integer representing the number of objects found in the most recent snapshot.
while (true) {
  // Take a snapshot of the red objects detected by
  // the AI Vision Sensor.
  aiVision.takeSnapshot(aiVision__Red);
  // Clear the screen and reset the cursor so that we can
  // display new information.
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1, 1);
  // Print the number of detected objects to the Brain's screen.
  Brain.Screen.print("Object Count: %d", aiVision.objectCount);
  // Wait 0.5 seconds before repeating the loop and
  // taking a new snapshot.
  wait(0.5, seconds);
}
tagDetection()#
The tagDetection(enable) method enables or disables AprilTag detection.
Parameters | Description
---|---
enable | A Boolean value which enables or disables AprilTag detection.
Returns: None.
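A minimal usage sketch, assuming detection modes have not already been configured in the AI Vision Utility:

```cpp
// Enable AprilTag detection so that snapshots can report AprilTags.
aiVision.tagDetection(true);
// Take a snapshot of all AprilTags now that detection is enabled.
aiVision.takeSnapshot(aivision::ALL_TAGS);
```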
colorDetection()#
The colorDetection(enable, merge) method enables or disables color and code object detection.
Parameters | Description
---|---
enable | A Boolean value which enables or disables color and code object detection.
merge | A Boolean value which enables or disables the merging of adjacent color detections. The default is
Returns: None.
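A minimal usage sketch, assuming a Color Signature named aiVision__Blue has already been configured for this sensor:

```cpp
// Enable color and code object detection, and merge adjacent
// color detections into single objects.
aiVision.colorDetection(true, true);
// Take a snapshot of the configured Color Signature.
aiVision.takeSnapshot(aiVision__Blue);
```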
modelDetection()#
The modelDetection(enable) method enables or disables AI model object detection, also known as AI Classification detection.
Parameters | Description
---|---
enable | A Boolean value which enables or disables AI Classification detection.
Returns: None.
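A minimal usage sketch, assuming an AI model is loaded on the sensor and that aivision::ALL_AIOBJS is the selector for all AI Classifications:

```cpp
// Enable AI model (AI Classification) detection.
aiVision.modelDetection(true);
// Take a snapshot of all AI Classifications.
aiVision.takeSnapshot(aivision::ALL_AIOBJS);
```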
startAwb()#
The startAwb() method runs auto white balance.
Returns: None.
set()#
The set(desc) method sets a new Color Signature or Color Code.
Parameters | Description
---|---
desc | The colordesc or codedesc object to set.
Returns: None.
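A hedged sketch of registering a new Color Signature at runtime. The name aiVision__Green and its signature values are placeholders, not tuned values; real values would normally come from the AI Vision Utility:

```cpp
// Create a new Color Signature "aiVision__Green" with the colordesc
// class (placeholder values; tune them with the AI Vision Utility).
aivision::colordesc aiVision__Green = aivision::colordesc(2, 20, 150, 40, 10.00, 0.20);
// Register the new Color Signature with the AI Vision Sensor.
aiVision.set(aiVision__Green);
// Take a snapshot using the newly set Color Signature.
aiVision.takeSnapshot(aiVision__Green);
```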
timestamp()#
The timestamp() method requests the timestamp of the last received status packet from the AI Vision Sensor.
Returns: The timestamp of the last status packet as an unsigned 32-bit integer, in milliseconds.
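A minimal sketch that prints the most recent packet timestamp to the Brain's screen (the %u format specifier is assumed here for the unsigned 32-bit value):

```cpp
// Print the timestamp of the last status packet, in milliseconds.
Brain.Screen.print("Last packet: %u ms", aiVision.timestamp());
```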
installed()#
The installed() method checks for device connection.
Returns: true if the AI Vision Sensor is connected, false if it is not.
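A minimal sketch that guards snapshot calls on the connection check (aiVision__Red is assumed to be a configured Color Signature, as in the constructor example above):

```cpp
// Only take snapshots if the AI Vision Sensor is connected.
if (aiVision.installed()) {
  aiVision.takeSnapshot(aiVision__Red);
} else {
  Brain.Screen.print("AI Vision Sensor not connected");
}
```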