Avoiding Collisions with Autonomous Mobile Machinery

By Tim Conklin and Dr. Thorsten Kever, ifm

The science of machinery avoiding obstacles is nothing new. Early automobiles added rear-view and side mirrors to reduce the problem, and in the early years of the Indianapolis 500, drivers carried an onboard mechanic whose secondary job was to keep track of where all the other participants were. In the future, can we expect to see self-driving construction machinery?

On today's construction sites, safety is paramount. Whether a driver is reversing or an excavator is executing a turn, serious accidents involving workers are a reality. When working with heavy machines, visibility for the operator is therefore very important.

One consideration is to make sure the machines are equipped with appropriate technology that allows the driver to operate them safely. If possible, a direct line of sight for the operator is preferable. If not, a camera monitor system can be fitted, but the monitors must be in the operator's direct line of sight. On a wheel loader, for example, the monitor can't be blocked by the bucket in front.

The trend is clearly moving toward active assistance systems. Active, in this case, means sensors that can detect persons or obstacles by themselves and actively warn the operator, visually or acoustically. Such a system can even slow or stop the machine if the operator is somehow not up to the task. This is where autonomous construction machinery comes in.

Different Levels of Autonomy

There are different levels of autonomy, from level zero through level five. At level zero you have no assistance at all; at level five the machine can drive itself at the push of a button. Some assistance systems still fall under level zero, as long as there is no intervention in the vehicle itself. An example would be a car with an ultrasonic reversing sensor that does not brake the car; it just shows something on a screen or beeps.

Or consider a wheel loader in a mine. Here, 3D and 2D sensors work for the operator: the operator is warned acoustically and has a monitor with a live feed from behind the vehicle. In addition, the 3D sensor actively measures for objects, so the operator is warned acoustically and the object is highlighted on the video screen. But the machine is not stopped, so it is still the operator's responsibility to control the vehicle; the assistance system is merely an aid.

As you move toward more complex systems, the driver assistance interacts with the machine, by braking, for example, though the responsibility still rests with the human driver. On compaction rollers, for instance, the system is connected to the brake and has three levels of reaction: first, an acoustic and visual warning; second, the machine slows down; and third, the machine is stopped if the operator does not react.
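
For illustration only, that escalation logic might look something like the following Python sketch; the thresholds and function names here are invented for this article, not taken from an actual product:

```python
# Hypothetical three-stage reaction logic for an assist system.
# All thresholds are illustrative, not real product parameters.

WARN_TTC_S = 6.0   # warn the operator below this time to collision
SLOW_TTC_S = 3.5   # reduce travel speed below this
STOP_TTC_S = 1.5   # apply the brake below this

def react(time_to_collision_s: float) -> str:
    """Map a time-to-collision estimate to one of three reactions."""
    if time_to_collision_s <= STOP_TTC_S:
        return "stop"   # brake the machine to a standstill
    if time_to_collision_s <= SLOW_TTC_S:
        return "slow"   # limit travel speed
    if time_to_collision_s <= WARN_TTC_S:
        return "warn"   # acoustic and visual warning only
    return "none"       # no intervention needed

print(react(5.0))  # "warn"
print(react(1.0))  # "stop"
```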

It's always a bit tricky with an assistance system that interacts with the machine: it must be reliable. If it produces too many detections without cause, that will hinder efficiency. In critical cases, like something small approaching the vehicle from the side, it must react, but in other cases you might need to pass very close to a wall. The system must therefore be very precise, so you are allowed to drive by but are still able to react to an obstacle.

Beyond that, you can have the system influence not only braking but also steering. One user in France manufactures a vehicle for grape harvesting. The operator must drive the vehicle along the grape vines while controlling the machine: the vines are shaken to remove the grapes, and the operator has to control the shaking. Here, the steering is taken over by the machine based on the 3D sensor's output, so the assistance system lets the operator concentrate on controlling the machine and harvest the grapes more efficiently.

At higher levels of autonomy, such as self-driving construction equipment, the system may be completely responsible, with driving modes that are fully autonomous. Here, no operator is needed at all.

3D Camera Technology

ifm's key technology is a 3D camera sensor that measures Time of Flight (ToF), based on infrared light bouncing off the object and back. The calculation is done internally: the sensor performs post-processing onboard and sends obstacle data, or other data such as steering information, to the control system, so there is limited downstream processing. It mostly happens inside the sensor.

This technology follows the same basic principle as lidar: a signal is actively sent out, in this case infrared light, and if it hits an object it is reflected back, so the time it needs to travel to the object and back can be measured. From this you get the distance and can render a 3D image of your surroundings.
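
In its simplest, pulsed form, the underlying math is just the speed of light multiplied by the measured round-trip time, then halved. A minimal Python illustration:

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def distance_from_round_trip(round_trip_s: float) -> float:
    """The light travels out and back, so the one-way distance
    to the reflecting object is half the round-trip path."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# A 100-nanosecond round trip corresponds to roughly 15 m:
print(distance_from_round_trip(100e-9))  # ~14.99 m
```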

pmd Technologies, a subsidiary of ifm, designs and manufactures the actual chipsets for this technology. ifm takes these chipsets, puts them into sensors and cameras, and develops all the software around them to sell the application in different markets.

Rather than sending out a single pulse, pmd uses a long, modulated signal that repeats at a specific frequency, mostly between 5 and 100 MHz. Measuring the phase shift of that modulation requires much less bandwidth, thus allowing 3D images to be built with good resolution.
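
The relationship between phase shift and distance is standard continuous-wave ToF math; the sketch below is illustrative, not vendor firmware. The phase shift maps to distance as d = c * phase / (4 * pi * f), and the measurement wraps around beyond c / (2f):

```python
import math

SPEED_OF_LIGHT_M_S = 299_792_458.0

def distance_from_phase(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Continuous-wave ToF: the phase shift between the emitted and
    received modulation encodes the round-trip time, so
    d = c * phase / (4 * pi * f)."""
    return SPEED_OF_LIGHT_M_S * phase_shift_rad / (4 * math.pi * mod_freq_hz)

def unambiguous_range_m(mod_freq_hz: float) -> float:
    """Beyond c / (2 * f) the phase wraps around and distances alias."""
    return SPEED_OF_LIGHT_M_S / (2 * mod_freq_hz)

print(unambiguous_range_m(10e6))   # ~15 m at 10 MHz modulation
print(unambiguous_range_m(100e6))  # ~1.5 m at 100 MHz modulation
```

The example shows the basic trade-off: higher modulation frequencies give finer depth resolution but a shorter unambiguous range, which is one reason such sensors can combine several modulation frequencies.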

Using a phase-based approach to ToF allows the technology to scale easily in size, range and cost. The consumer market has taken advantage of the price and size scalability through the use of pmd technology in mobile phones. The industrial market utilizes the size scalability to provide solutions for mobile robots of all sizes. Finally, vehicles in industries like municipal services, construction and agriculture utilize the range scalability to offer obstacle detection at speed.

ToF technology also brings key advantages in dynamic environments. Because the system measures actual distance, it is not affected by changing light conditions such as darkness or direct sunlight. Additionally, the color or shape of the detected object does not influence the signal.

In the consumer market, mobile phones with this technology can measure room sizes, for example to calculate how much paint to buy. Robots in warehouses and grocery stores rely on pmd to analyze the size and shape of a box on a shelf to automate picking or restocking. Further industrial use cases include automated baggage handling in airports, augmented reality, and robotic stacking or unstacking of boxes on a pallet. All of these applications require high accuracy and fast processing without any loss of reliability.

Robust Sensing at the Worksite

For mobile applications and automated heavy equipment at construction sites, a system has to be very robust: these are harsh environments with a lot of dirt, vibration, shock, and extreme temperatures. The system also communicates over the CAN network to simplify integration with the equipment, and it simplifies things further because the data is not sent to an external processor; all evaluation is done onboard.
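
For a rough picture of what that CAN integration could look like on the receiving side, here is a sketch using the open-source python-can library; the message ID and payload layout are invented for illustration, and a real integration would follow the sensor's CAN documentation:

```python
# Hypothetical consumer of obstacle messages from a 3D sensor on CAN.
# The ID and payload layout below are assumptions for this sketch.
import struct
import can

OBSTACLE_MSG_ID = 0x18FF50E5  # invented extended CAN identifier

bus = can.interface.Bus(channel="can0", interface="socketcan")

while True:
    msg = bus.recv(timeout=1.0)
    if msg is None or msg.arbitration_id != OBSTACLE_MSG_ID or len(msg.data) < 5:
        continue
    # Assumed layout: distance (cm, unsigned), lateral offset (cm, signed), flags
    distance_cm, lateral_cm, flags = struct.unpack("<HhB", msg.data[:5])
    print(f"obstacle at {distance_cm / 100:.2f} m, offset {lateral_cm / 100:+.2f} m")
```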

This technology can be used on garbage trucks, wheel loaders, compaction rollers and almost any rugged mobile machine. The sensors measure in 3D, and this real-time data is calculated and calibrated, then filtered and grouped. By evaluating the pixels in software, the system can identify objects and then track them. This reveals where the objects are: their distance, position, height, size and movement.
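
The filtering and grouping step can be pictured, in greatly simplified form, as clustering 3D points by proximity. The following Python sketch is illustrative only and far cruder than real sensor firmware:

```python
# Greatly simplified "group" step: cluster 3D points into objects.
import math

def cluster_points(points, max_gap_m=0.3):
    """Greedy clustering: a point joins a cluster if it lies within
    max_gap_m of any point already in that cluster."""
    clusters = []
    for p in points:
        for cluster in clusters:
            if any(math.dist(p, q) <= max_gap_m for q in cluster):
                cluster.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# Two nearby points form one object; the third is a separate object.
pts = [(1.0, 0.0, 0.5), (1.1, 0.05, 0.5), (4.0, 2.0, 0.4)]
print(len(cluster_points(pts)))  # 2
```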

Movement is a key data point: is the object stationary or moving, and if it is moving, is it moving into or out of the vehicle's path? The system evaluates the velocity and turning of the vehicle, then combines this information to see where the vehicle and the object are moving. This allows intelligent predictions of a potential collision, and this is where the decision point comes in: whether braking is necessary, for example.
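
At its core, such a prediction can be reduced to a time-to-collision estimate. The sketch below is a simplified, assumed model (straight-line motion, a single signed closing speed), not actual product logic:

```python
def time_to_collision_s(distance_m: float, closing_speed_m_s: float) -> float:
    """Time until the gap closes, assuming straight-line motion.
    closing_speed_m_s is the combined speed at which the vehicle and
    the object approach each other; a non-positive value means the
    gap is opening or constant, so no collision is predicted."""
    if closing_speed_m_s <= 0:
        return float("inf")
    return distance_m / closing_speed_m_s

# Reversing at 1.5 m/s toward a stationary object 9 m behind the truck:
print(time_to_collision_s(9.0, 1.5))  # 6.0 seconds
```

An estimate like this is what feeds the staged warn/slow/stop reaction sketched earlier.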

For example, a garbage truck is reversing at constant velocity when a sensor picks up an object. The object is tracked, but the situation is judged non-critical because the calculated time to collision is, at first, rather large. Without a reaction, however, the distance shrinks and the situation can become critical. That is when the system triggers an acoustic and optical signal and waits. If the operator does not react, a second signal goes directly to the brake and the vehicle starts braking, reducing the velocity, which increases the time to collision and, ideally, avoids the object altogether.

Active assistance systems are growing on mobile machinery, using sensors to detect objects or persons that could be involved in a collision. These systems trigger alarms and can even control machine functions such as braking, ultimately making work sites safer.

