Object Mapping with Romi

Introduction

The objective of this project was to build a robot that could map out structures placed in a 128 cm by 128 cm arena. Maps were produced at two resolutions, 14 by 14 and 7 by 7 pixels, depending on how many segments the current structure had, so each map cell covered roughly 9 cm or 18 cm of the arena, respectively. Using a variety of sensors, the robot measured distances to the structure's panels to determine their locations, then packed those measurements into an array that was communicated to our mapping software. The software visualized the data as a final map, which let us measure the precision and accuracy of the mapping pipeline.

Mapping is an active area of research, and one this course spent considerable time studying. If a robot knows its environment before deployment, that knowledge can be used to plan trajectories to a given point under parameters such as the shortest route that is known to be safe. If the robot is instead in an unknown area, it can actively map the space as it travels and archive that data for later use when planning new trajectories.

Sensors Used

IR Sensor

On our Romi, the IR sensor was primarily used to control the robot's proximity to the wall, keeping it at a set standoff distance and avoiding possible collisions. It worked in combination with our velocity controller: the IR sensor's ADC output was converted into a distance, which was then passed to the wallfollower.cpp program. There, a PD controller made velocity adjustments that steered the sensor back toward the desired distance while damping the changes in movement so the motion stayed smooth.
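As a concrete illustration (not our exact wallfollower.cpp), the conversion and control step look roughly like the sketch below. The pin, the power-law calibration constants, and the gains are all placeholder values; velocityController is the project's velocity controller seen in the later excerpts.

#include <Arduino.h>

const int IR_PIN = A0;        // analog pin the IR sensor is wired to (placeholder)
const float CAL_A = 3000.0;   // power-law calibration gain (placeholder)
const float CAL_B = -1.1;     // power-law calibration exponent (placeholder)
const float K_P = 0.5;        // proportional gain (placeholder)
const float K_D = 2.0;        // derivative gain (placeholder)

// Convert the raw ADC reading to a distance in cm using a power-law fit,
// a common calibration model for Sharp-style IR rangers.
float irDistanceCM() {
    int adc = analogRead(IR_PIN);
    return CAL_A * pow(adc, CAL_B);
}

// One PD update: nudge the wheel speeds so the measured wall distance
// converges on the target, with the D term damping the correction.
void followWallPD(float targetCM, float baseSpeedCM) {
    static float prevError = 0;
    float error = targetCM - irDistanceCM();
    float effort = K_P * error + K_D * (error - prevError);
    prevError = error;
    // Sign convention depends on which side of the robot the sensor faces.
    velocityController.setVelocityCM(baseSpeedCM - effort, baseSpeedCM + effort);
}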

Ultrasonic Sensor

The ultrasonic sensor was our main sensor for collecting distance data on the object in the center of the arena. We then converted that data into points on a map.

Filtering Map Errors

For the simple cross, the code rejected any reading above 80 cm. Once we had more than 50 valid readings for a cell, we registered that cell as containing an object and used the average of those readings to determine its distance. For the more complicated cross, any value above or below a set threshold was rejected. We applied an offset of 22.25 cm to each coordinate so the readings lined up with the object in the center of the field. If a given "block" did not accumulate enough readings, they were all filtered out; this dealt with reflections from the ultrasonic sensor.

void Mapper::mapPosition2(int side, int curr_x, int curr_y, int curr_theta){
    // Median-of-three ultrasonic reading (distance to whatever it sees),
    // shifted by the sensor's offset from the robot's reference point.
    float distance = getMedianDistance();
    distance += robotOffset2;
    int index;
    // Each case handles one side of the object; the per-side constants
    // (divisor tweaks, reading-count thresholds, distance cutoffs) differ.
    switch(side){
        case 0:
            // Which map column the robot is currently alongside.
            index = (int) (curr_x / (mapDimension / mapLength2 + 1));
            if(index != currentIndex){
                // Entering a new column: commit the previous one if enough
                // readings agreed that something was there.
                if(numReadings > 107){
                    map2[currentIndex][(int) ((total / numReadings) / (mapDimension / mapLength2))] = true;
                }
                total = 0;
                numReadings = 0;
                currentIndex = index;
            } else {
                // Accumulate only plausible readings (reject beyond 70 cm).
                if(distance < 70) {
                    total += distance;
                    numReadings++;
                }
            }
            break;
        case 1:
            index = (int) (-curr_y / (mapDimension / mapLength2 + 1.1));
            if(index != currentIndex){
                if(numReadings > 100){
                    // This side measures from the opposite wall, so the
                    // averaged distance is mirrored across the map.
                    map2[(int) ((mapDimension - (total / numReadings)) / (mapDimension / mapLength2))][currentIndex] = true;
                }
                total = 0;
                numReadings = 0;
                currentIndex = index;
                if(currentIndex == mapLength2-1) currentIndex = 0;
            } else {
                if(distance < 70) {
                    total += distance;
                    numReadings++;
                }
            }
            break;
        case 2:
            distance -= 3; // empirical correction for this side
            index = (int) (curr_x / (mapDimension / mapLength2 + 1.2));
            if(index != currentIndex){
                if(numReadings > 90){
                    map2[currentIndex][(int) ((mapDimension - (total / numReadings)) / (mapDimension / mapLength2))] = true;
                }
                total = 0;
                numReadings = 0;
                currentIndex = index;
            } else {
                if(distance < 72) {
                    total += distance;
                    numReadings++;
                }
            }
            break;
        case 3:
            index = (int) (-curr_y / (mapDimension / mapLength2 + 1));
            if(index != currentIndex){
                if(numReadings > 110){
                    map2[(int) ((total / numReadings) / (mapDimension / mapLength2))][currentIndex] = true;
                }
                total = 0;
                numReadings = 0;
                currentIndex = index;
            } else {
                if(distance < 72) {
                    total += distance;
                    numReadings++;
                }
            }
            break;
    }
}
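Note the per-side differences: the reading-count thresholds (107, 100, 90, 110), the distance cutoffs (70 cm versus 72 cm), and the small divisor tweaks (+1, +1.1, +1.2) vary from case to case. These appear to be per-side calibrations for how each wall's geometry and the sensor's viewing angle affected the readings.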
                  
// Take three ultrasonic readings and return the median, so a single
// outlier (high or low) cannot corrupt the measurement.
float Mapper::getMedianDistance(){
    float readings[3];
    int minIndex = -1, maxIndex = -1;
    for(int i = 0; i < 3; i++){
        readings[i] = rangefinder.getDistance();
        if(minIndex == -1 || readings[i] < readings[minIndex]){
            minIndex = i;
        }
        if(maxIndex == -1 || readings[i] >= readings[maxIndex]){
            maxIndex = i;
        }
    }
    // The indices sum to 0 + 1 + 2 = 3, so the median is whichever
    // index is neither the min nor the max.
    return readings[3 - maxIndex - minIndex];
}
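mapPosition1, the simple-cross filter described above, is not shown in this excerpt. A minimal sketch of one side of it, assuming the same member variables as mapPosition2 (total, numReadings, currentIndex, mapDimension) plus an analogous map1, mapLength1, and robotOffset1, and the 80 cm / 50-reading rule from the prose:

void Mapper::mapPosition1(int side, int curr_x, int curr_y, int curr_theta){
    float distance = getMedianDistance();
    distance += robotOffset1;   // assumed analogue of robotOffset2
    if(side == 0){
        int index = (int) (curr_x / (mapDimension / mapLength1));
        if(index != currentIndex){
            // New column: commit the previous one if more than 50 valid
            // readings agreed that an object was there, using their average.
            if(numReadings > 50){
                map1[currentIndex][(int) ((total / numReadings) / (mapDimension / mapLength1))] = true;
            }
            total = 0;
            numReadings = 0;
            currentIndex = index;
        } else if(distance < 80){   // reject anything at or beyond 80 cm
            total += distance;
            numReadings++;
        }
    }
    // ...cases 1-3 would mirror case 0 with the axes swapped, as in mapPosition2.
}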

State Machine

Our state machine has a total of 7 states: four that map the individual sides of the object in the center, an idle state, a state for turning at the end of each mapped side, and finally a state that busses the generated map matrix to the ESP32, which sends it on to the MQTT server.
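FIRST_SIDE, SECOND_SIDE, TURNING, and afterTurn appear in the code excerpts below; the remaining names in this enum sketch are assumptions. The side states are listed first so they line up with mapPosition2's cases 0 through 3, since the code passes FIRST_SIDE directly as the side argument:

enum RobotState {
    FIRST_SIDE,   // = 0: wall follow + map side 1 (matches mapPosition2 case 0)
    SECOND_SIDE,  // = 1
    THIRD_SIDE,   // = 2 (name assumed)
    QUAD_SIDE,    // = 3 (name assumed)
    IDLE,         // wait for button A (name assumed)
    TURNING,      // corner turn between sides
    SEND_MATRIX   // push the finished map to the ESP32 / MQTT (name assumed)
};
RobotState robotState = IDLE;
RobotState afterTurn;   // which side state to enter once a turn completes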

Idle State

This state holds the robot immobile, setting both velocities to zero, and waits for button A to be pressed. Once pressed, the robot starts mapping either the simple cross or the more complicated cross and sets the Romi's odometry to the determined starting coordinate.
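A minimal sketch of what the IDLE case amounts to; the button helper is the standard Pololu Romi32U4 one, and the starting pose and the way the map type was chosen are assumptions:

case IDLE:
    velocityController.setVelocityCM(0, 0);      // hold still
    if(buttonA.getSingleDebouncedPress()){       // Pololu Romi32U4ButtonA helper
        mapType = 1;        // 1 = simple cross (actual selection method not shown)
        position.setX(0);   // assumed starting coordinate
        position.setY(0);
        position.setTheta(0);
        robotState = FIRST_SIDE;
    }
    break;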

First_side State

This state mapped the first side of the object. It constantly updated the robot's odometry using forward kinematics together with the Romi's encoders, which told us where the robot should be after a given number of encoder counts; this pose was also what let the mapper function place detected objects into the correct cells. The wallFollower function then held the Romi at the set distance from the wall while it drove a straight path at the commanded speed. Meanwhile, the mapper function acquired distances with the ultrasonic sensor and ran them through the filter to discard bad and inaccurate readings. The readings that survived were averaged over a set count, and that average determined where the object sat within the space currently being mapped. The state switched to the turning case once the camera on the Romi saw the AprilTag on the wall ahead reach a certain height threshold.

case FIRST_SIDE:
      // Dead-reckon the pose from the measured wheel velocities.
      position.UpdatePose(velocityController.readVelocityLeftCM(), velocityController.readVelocityRightCM());
      if(mapType == 1){
        // Simple cross: wall follow and feed readings to mapPosition1.
        wallFollower.followAtVelocity(7, 3);
        mapper.mapPosition1(FIRST_SIDE, position.getX(), position.getY(), position.getTheta());
      } else {
        // Complicated cross: same wall following, mapPosition2 instead.
        wallFollower.followAtVelocity(7, 3);
        mapper.mapPosition2(FIRST_SIDE, position.getX(), position.getY(), position.getTheta());
      }
      // Near the end of the side, watch for the AprilTag on the wall ahead.
      if(position.getX() > 80) {
        if(tagClose(tag1)) {
          velocityController.setVelocityCM(0,0);
          startTheta = position.getTheta();

          robotState = TURNING;
          afterTurn = SECOND_SIDE;
          // Stream any occupied cells found so far to the ESP32.
          if(mapType == 1){
            for(int i = mapper.mapLength1-1; i >= 0 ; i--){
              for(int j = 0; j < mapper.mapLength1; j++){
                if(mapper.map1[i][j] == 1){
                  String topic = "(" + String(i) + "," + String(j) + ")";
                  sendMessage(topic, String(mapper.map1[i][j]));
                  delay(250);
                }
              }
            }
          }
        }
      }
      break;

Second_side, Third_side, and Quad_side States

These states are very similar to FIRST_SIDE; the only difference is that the odometry was reset to the point where the Romi should have ended up after completing its turn, which happens as the robot passes through the AFTER_TURN state. The Romi wall followed and used the ultrasonic sensor to map the current side. Once the odometry reached a certain point, the camera started looking for the AprilTag in order to stop and switch to the TURNING state.

Turning State

In our turning routine, the robot waited to turn until the camera reported the tag's average height; when that height crossed a certain threshold, the robot entered the TURNING state. There the Romi turned to face the next wall it would travel along, updating its odometry as it rotated and using that odometry to stop at the target heading. The loop kept turning until the change in theta exceeded 1.5 radians, roughly a quarter turn.

// Spin in place (equal and opposite wheel velocities) until the heading
// has changed by 1.5 rad, i.e. roughly a 90-degree turn.
startTheta = position.getTheta();
while(startTheta - position.getTheta() < 1.5){
    velocityController.setVelocityCM(5, -5);
    position.UpdatePose(velocityController.readVelocityLeftCM(), velocityController.readVelocityRightCM());
}
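tagClose() is referenced in the FIRST_SIDE excerpt but not shown. A plausible sketch, assuming the camera reports each tag as a struct with a pixel-height field and that the threshold is a tuned constant (both the struct layout and the value are assumptions):

// Minimal stand-in for the camera's tag report; field names are assumed.
struct AprilTagDatum { int id; int h; };   // h = tag height in pixels

const int TAG_HEIGHT_THRESHOLD = 35;       // tuned cutoff in pixels (placeholder)

// True once the tag looms large enough in the frame, i.e. the Romi is
// close enough to the wall ahead to stop and begin its turn.
bool tagClose(AprilTagDatum tag) {
    return tag.h > TAG_HEIGHT_THRESHOLD;
}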


Send_matrix State

This state sends the matrix of the finished map to the ESP32, which publishes it to the MQTT server.

if(mapType == 1){
    // Iterate the rows from the top down, printing the whole grid and
    // publishing each occupied cell under a "(row,col)" topic.
    for(int i = mapper.mapLength1-1; i >= 0 ; i--){
        for(int j = 0; j < mapper.mapLength1; j++){
            Serial.print(mapper.map1[i][j]);
            if(mapper.map1[i][j] == 1){
                String topic = "(" + String(i) + "," + String(j) + ")";
                sendMessage(topic, String(mapper.map1[i][j]));
                delay(250); // give the ESP32 time to forward each message
            }
        }
    }
}
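sendMessage() itself lives elsewhere in the project. Since the prose says the matrix is bussed to the ESP32 and forwarded to MQTT, a reasonable sketch is a serial bridge like the following; the "topic:message" framing and the use of Serial1 are assumptions:

// Forward one key/value pair over UART to the ESP32, which is assumed to
// parse "topic:message" lines and republish them to the MQTT broker.
void sendMessage(const String& topic, const String& message) {
    Serial1.println(topic + String(':') + message);
}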

Afterturn State

This state resets the odometry to the pose where the robot should be, depending on which side it will map next.

if(afterTurn == SECOND_SIDE){
    Serial.println("resetting odom for side 2");
    // Known pose at the start of the second side: 110 cm along x,
    // 15 cm into -y, facing -pi/2.
    position.setX(110);
    position.setY(-15);
    position.setTheta(-3.14/2);
}
// ...analogous branches reset the pose for the third and fourth sides.

Conclusion

In summary, using our combination of sensors, we were able to develop a working robot that could successfully map each field to a high degree of accuracy. The data was output through the MQTT server and displayed on our map visualizer. If you want to see our Romi in action, you can view it below.

Let’s Work Together!

I will be graduating in May 2025 with a Master's degree in Robotics. I am currently seeking opportunities to apply my skills and knowledge in a professional setting. If you are interested in my background and achievements, please feel free to reach out for further discussion.