SEA.AI is a product that fills a vital gap in the maritime collision avoidance arsenal, providing situational awareness for surveillance and search and rescue alongside radar and AIS. It relies on Artificial Intelligence (AI) to determine automatically, in a split second, whether something in the water ahead represents a collision threat or an object of interest, such as a man overboard. But for the AI to be effective, it needs to be trained: quite a significant task!
The SEA.AI hardware used on board a vessel comprises a vision unit, which houses an array of cameras mounted at the masthead of a sailing yacht or on the electronics mount of a motor yacht. SEA.AI systems capture images using both low-light colour cameras and thermal cameras. These images are fed to a processing unit, where the AI's algorithm relies on a massive and ever-expanding database to determine whether something in the water ahead represents a threat, an object of interest, or neither. The images can also be stored so that SEA.AI's annotation team can subsequently label them, picking out images that include potential objects and marking each as identified or unidentified. The outputs are then fed into the algorithm, which employs deep learning to educate the AI.
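The capture, annotation and training flow described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the class names, labels and selection logic are assumptions for clarity, not SEA.AI's actual software.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Label(Enum):
    IDENTIFIED = "identified"      # annotator recognised the object's class
    UNIDENTIFIED = "unidentified"  # an object is present but its class is unknown
    EMPTY = "empty"                # no object of interest in the frame

@dataclass
class Frame:
    camera: str                    # e.g. "low_light_colour" or "thermal"
    image_id: str
    label: Optional[Label] = None  # None until the annotation team reviews it

def select_training_frames(frames: List[Frame]) -> List[Frame]:
    """Keep only reviewed frames that contain an object (identified or not);
    these are the frames that feed the deep-learning training set."""
    return [f for f in frames
            if f.label in (Label.IDENTIFIED, Label.UNIDENTIFIED)]
```

In this sketch, unreviewed frames and frames labelled as empty are excluded, mirroring the annotation team's role of picking out the images worth training on.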
This learning process has been running at full pace since SEA.AI's inception in 2018 under its previous name, OSCAR. It started with identifying the most common objects, such as pleasure craft and commercial vessels. Since then, image data has been collected from a wide variety of sources, including yachts competing in the Vendée Globe, the Route du Rhum and other high-profile offshore racing events in France, as well as everything from commercial vessels to superyachts and long-distance cruising yachts. With images taken over hundreds of thousands of miles of seafaring, the database now contains more than 9 million images.
The magic of SEA.AI lies in its process of detecting and identifying objects. An object is identified through 'feature extraction' based on its colour, structure, shape and also its temperature (using the thermal cameras). Thus, every recorded object has its own unique 'visual signature'. But a single object can appear very different depending on the angle from which it is viewed, its range, the sea state, its degree of submersion, its orientation in the water, the time of day, the weather, and the amount, direction and inclination of sunlight (among many other variables). As a result, identifying one object to a high degree of certainty can require input from hundreds of thousands of images.
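The idea of a 'visual signature' can be made concrete with a small sketch: each detection's colour, shape, structure and thermal properties are combined into a numeric vector, and signatures of the same object should land close together despite viewing variation. The specific features, normalisations and distance measure here are illustrative assumptions, not SEA.AI's actual model.

```python
import math
from typing import List

def visual_signature(mean_hue: float, aspect_ratio: float,
                     edge_density: float, temp_delta_c: float) -> List[float]:
    """Combine one detection's measured properties into a simple,
    normalised feature vector ('visual signature'). All features and
    scalings are hypothetical, chosen only for illustration."""
    return [
        mean_hue / 360.0,                # colour (hue in degrees -> 0..1)
        math.tanh(aspect_ratio),         # shape (bounded aspect ratio)
        min(edge_density, 1.0),          # structure (edge content, capped)
        math.tanh(temp_delta_c / 10.0),  # thermal contrast vs. the water
    ]

def signature_distance(a: List[float], b: List[float]) -> float:
    """Euclidean distance between two signatures: many images of the same
    object should yield nearby signatures despite different viewing angles,
    ranges and lighting."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

The need for hundreds of thousands of training images follows directly: each variable (angle, range, sea state, lighting) shifts the signature slightly, and the model must learn that all of those shifted signatures belong to the same object class.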
While passively collected images are very valuable, they often don't provide optimal data for identifying the objects that can cause some of the most catastrophic collisions.
One of the most frequent questions SEA.AI receives from interested parties is: 'Can SEA.AI detect a semi-submerged container?' Detecting a floating container is in most cases straightforward, thanks to its larger size compared with buoys, its rigid rectangular structure, and the temperature differential between the container and the surrounding water.
In this instance, SEA.AI's R&D team sailed out of Brest aboard a research boat (run by the Céladon Association) rigged with all of SEA.AI's different camera models, with camera units mounted at different heights above sea level. This setup both provides more data for the algorithm and tests the AI in 'real' navigation situations. It is a mandatory step to prove that all three systems are able to identify containers, and to educate the AI for future detections.
Shaban Almouahed, one of SEA.AI's R&D engineers, explains: "We put a container in the water and acquire images from different directions, and in different lighting and sun-reflection conditions, throughout the day. We simulate different collision scenarios to test the performance of the system. These images help us to say, 'Now we are able to detect a container with a high percentage of reliability, like other objects'."
Such tests also help validate the detection range for objects of various sizes, which also depends on the SEA.AI model being used. For example, the Sentry, used on large vessels, has a larger detection range thanks to its near- and far-field camera configuration, and can scan 360° around the vessel. The SEA.AI Competition has a shorter detection range, but the advantage of being small enough to be mounted at the masthead of racing boats. The Sentry will detect small objects (a buoy or a person overboard) in the water within 700 m, whereas the Competition 640, used on racing yachts, detects similar objects within 150 m ahead (based on physical modelling and real-condition validation).
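The quoted figures (Sentry: small objects within roughly 700 m; Competition 640: within roughly 150 m) can be captured in a trivial lookup. The table and function below are an illustrative assumption built only from the numbers in the text, not SEA.AI's actual specification.

```python
from typing import Dict

# Quoted small-object (buoy / person overboard) detection ranges, in metres.
# Values come from the article's figures; treat them as indicative only.
SMALL_OBJECT_RANGE_M: Dict[str, float] = {
    "Sentry": 700.0,
    "Competition 640": 150.0,
}

def can_detect_small_object(model: str, distance_m: float) -> bool:
    """True if a buoy- or person-sized object at distance_m falls inside
    the quoted detection range for the given SEA.AI model."""
    limit = SMALL_OBJECT_RANGE_M.get(model)
    return limit is not None and distance_m <= limit
```

A buoy 500 m ahead would fall inside the Sentry's quoted range but well outside the Competition 640's, which is the trade-off the text describes: reach versus a unit light enough for a racing masthead.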
Data is only valuable if you can make sense of it and put it to use. That requires a deep understanding of the sea combined with state-of-the-art computer vision: SEA.AI's long-standing strength, backed by the largest database of its kind in the world.
Photo credit for images: © POLARYSE_JC_YR