Interview by: Sam Bartusek
I recently spoke with Dr. Brendan Englot, researcher and assistant professor at Stevens Institute of Technology, about his fascinating work in underwater robotics. His current lab develops software that makes robots more autonomous and self-sufficient, especially in complex environments and uncertain conditions.
SB: Thanks so much for taking the time to talk with us. Could you start by telling me a bit about your personal background? Where did you grow up, and how did you get involved with robotics?
BE: I’m pretty local; I grew up in Queens, New York City. I did my undergraduate and graduate studies, including my PhD, at MIT. As an undergraduate I identified that I wanted to study robotics, and then I tried to find opportunities to do that in graduate school.
It was during the process of getting accepted into graduate school and trying to find a thesis project that an opportunity arose to work on a robot that was being designed to perform an autonomous in-water ship hull inspection. It was a prototype that had been in development at MIT for about five years before that, and it was getting to a state of maturity where they needed new students to help with the algorithms and the autonomy of the robot. It had already become a pretty capable physical platform and now they were trying to make it more intelligent.
The project involved working on algorithms for autonomous navigation that would be used by that robot to inspect a ship hull, to know exactly where it is relative to the ship at all times, and ideally also detect any anomalies along the hull. In particular they were interested in minesweeping, so the goal there was to sweep the hull with sonar and with a camera and try to detect if any mines were planted on the hull. I worked on that throughout my entire time in graduate school, from 2007 to 2012.
SB: Was that your first underwater project?
BE: That was the very first marine robotics project I got involved in. All I knew at that point was that I wanted to study robotics, and I was looking for opportunities to study it in grad school. It ultimately came down to which project was the first one where funding was available, and that was the one. And then I ended up getting really immersed in that field. I grew to really like it and wanted to continue working in that area. I was largely motivated by the fact that there are such tough navigation problems underwater: sensing is tough, control is tough. I think some of the biggest challenges in autonomy are in that domain.
SB: Have you done any work in industry?
BE: Yes, after my PhD I spent about two years working at United Technologies, which is a large aerospace and building technologies company, looking more at the aerospace side of robotics. I was working specifically with the Sikorsky helicopter company, which makes the Black Hawk helicopter, widely used by the Army.
Their interest was creating technology to add to their existing helicopters that would allow them to fly unmanned if necessary. If the helicopter had to fly into a contested area, to perform a medical evacuation for example, then they could send it in without a pilot, and it wouldn’t be as bad or costly if the helicopter were to be shot down. The work there involved adding a whole autonomy pipeline — sensors, algorithms — to a helicopter, so they could fly an unmanned mission from takeoff to landing.
SB: So to dive into that minesweeping technology, how does that actually work? Could you give me some of the details of that process?
BE: This robot was designed to do something equivalent to ‘mowing the lawn’ — going back and forth in a big zig-zag pattern along the hull, sweeping everything. In the course of doing so it would build a map of the hull and try to both a) use that information to very accurately localize the robot along the hull, and b) detect mine-shaped objects if there were any in the robot’s field of view. The Office of Naval Research was funding this project, and they have a very large effort dedicated to mine countermeasures — both the kind that we were looking for, planted on ships or structures, and also mines that might be buried on the seafloor or floating. Ultimately, any sort of mine that poses danger to Navy ships.
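The ‘mowing the lawn’ sweep he describes is what the motion-planning literature calls a boustrophedon coverage pattern. As a purely illustrative sketch (not the actual system’s code; the area dimensions and lane spacing are made up), the waypoints for such a sweep over a rectangular patch of hull might be generated like this:

```python
def lawnmower_waypoints(width, height, lane_spacing):
    """Boustrophedon (back-and-forth) coverage path over a width x height area.

    Returns a list of (x, y) waypoints; each consecutive pair is one sweep lane.
    """
    waypoints = []
    y = 0.0
    left_to_right = True
    while y <= height:
        if left_to_right:
            waypoints.append((0.0, y))
            waypoints.append((width, y))
        else:
            waypoints.append((width, y))
            waypoints.append((0.0, y))
        left_to_right = not left_to_right  # reverse direction for the next lane
        y += lane_spacing
    return waypoints

# A 10 m x 4 m patch swept in lanes 2 m apart: three lanes, six waypoints.
path = lawnmower_waypoints(10.0, 4.0, 2.0)
```

The lane spacing would in practice be chosen from the sonar’s swath width so that adjacent lanes overlap slightly and nothing is missed.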
SB: Are there any civilian applications for this technology?
BE: One of the challenging things is that although you could use that technology to detect a mine, I think it would be very difficult to detect something like a structural crack. You might be able to do it with a camera, but with a sonar you might not have the clarity or resolution you would need.
For civilian applications, I think if you’re interested in just getting a general idea of what’s under the water, if you want to build a map or model of the structures underwater, then it’s perfectly fine for doing that. If you want to find a needle in a haystack, it might be harder to do, depending on how large the item is that you’re looking for. The mines that were of interest in this program were primarily limpet mines, which are magnetically attached to a steel structure — they’re about the size of your wallet, so they’re large enough that even a sonar would be capable of detecting them.
SB: So how is autonomous technology better than other technologies and methods, like human divers or even animals?
BE: The predominant method is relying on human divers who do these sweeps manually. A team of divers will go down and check the ship, shining flashlights at it and trying to make sure they’ve swept the whole thing and can confirm there are no mines on it. There is also a marine mammal program that the Navy runs, where they train dolphins, sea lions and other mammals to perform these kinds of sweeps themselves.
I don’t know to what extent that method is used in practice. I know the capability exists, I just don’t know how widely they use it. I’ve been told there are some flaws with that capability, just because sometimes you can’t be a hundred percent certain that the animal has looked at everything. The animals are good at recognizing certain shapes, and letting you know when and where they’ve found them, but I think there are also issues where they can be a little temperamental, and in a critical wartime scenario the Navy might rely on human divers instead.
So the idea was to keep any living thing out of potential danger in the water by doing this in a completely automated manner. The main thing that I was contributing to that project was the motion-planning algorithm, which this robot would use in areas where ‘mowing the lawn’ was not sufficient — specifically areas like the stern of the ship, where you have to look for mines above and between the rudders, the shaft, the propellers and all of the complex structures that are harder to sweep. The prototype that was developed during this project is now a commercial product and is being produced in quantity.
SB: And now you’re at the Stevens Institute of Technology.
BE: Yes, after a stint in aerospace at United Technologies, I’m now at Stevens. I’ve been here for three years, and I’m looking more at the underwater side of things. Stevens has historic involvement in the maritime domain — it has naval and ocean engineering programs, and it has the Davidson Laboratory — so there’s a wide range of activities, from oceanography, climate and ocean modeling, forecasting, and coastal engineering to things like hydrodynamics, ship design and naval architecture.
The way I fit in with that is that I’m one of the folks studying underwater robotics. I’m in the Department of Mechanical Engineering, not Ocean Engineering, but I collaborate quite a bit with the researchers at the Davidson Laboratory; we do experimental testing with them.
SB: So how is the research you’re doing here different from your previous projects at MIT?
BE: One of the big lessons learned from the project that I did at MIT was that all of the motion planning we were doing to inspect the stern of the ship relied on a prior model — either a CAD model of the ship that we had in advance, or a model collected during a previous survey of the ship. In general, that sort of approach will only be as good as your model.
So what we’re doing now at Stevens is trying to develop a more robust solution to this, where we could deploy a robot to perform an inspection for which the robot doesn’t necessarily have to have any prior model. It might have no knowledge other than the boundaries within which you want it to inspect everything and then come back and produce a 3D map that the user can analyze. The approach that we’re taking now has to be able to take different sources of uncertainty into account, such as the uncertainty and noise in the robot’s sensor measurements, and the disturbances in the water that might be influencing the robot, like currents or waves. We try to go into the problem knowing that there’s uncertainty, and fold that into our solution.
SB: So how do you test the software, either in the lab or out in the field?
BE: Right now, we’re working with an ROV platform called the VideoRay, which is a widely used, off-the-shelf inspection-class ROV. It’s used by law enforcement and the military to do hull sweeps, harbor sweeps and inspection of various public and civilian infrastructure as well. We’re trying to make it more intelligent, because currently it requires very intensive manual piloting, where a pilot is looking at a video feed and trying to steer and control the robot at a very low level from that video feed.
So we’re developing software for the VideoRay that would allow it to be dropped into an environment and given only a bounding box within which to go down and explore on its own. It will be able to map its surroundings, avoid collisions, and make decisions about how to explore an environment that might contain many different structures and complex obstacles it has to avoid. We’ve been testing it at various piers in the New York Harbor area; we have some piers near Stevens in Hoboken and Jersey City, and also some piers in Manhattan on the Hudson River.
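At a high level, that explore-and-map behavior resembles frontier-based exploration: repeatedly visit a known-free cell that borders unexplored space, sense there, and stop once the bounding box contains no such cells. The following is a hypothetical sketch, not the lab’s actual software; the grid representation, sensing model, and all names are assumptions:

```python
UNKNOWN, FREE, OBSTACLE = 0, 1, 2

def neighbors(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def frontiers(grid):
    # Known-free cells adjacent to at least one unknown cell.
    return [c for c, s in grid.items()
            if s == FREE and any(grid.get(n) == UNKNOWN for n in neighbors(c))]

def sense(grid, world, cell):
    # Reveal the true state of the cells adjacent to the robot.
    for n in neighbors(cell):
        if n in world:
            grid[n] = world[n]

def explore(world, start):
    grid = {c: UNKNOWN for c in world}   # the robot's map starts fully unknown
    grid[start] = FREE
    sense(grid, world, start)
    pose, path = start, [start]
    while True:
        front = frontiers(grid)
        if not front:
            return grid, path            # nothing left to explore in the box
        # Move to the nearest frontier (Manhattan distance) and sense there.
        pose = min(front, key=lambda c: abs(c[0] - pose[0]) + abs(c[1] - pose[1]))
        sense(grid, world, pose)
        path.append(pose)

# Toy 3x3 "world" with one obstacle in the middle.
world = {(x, y): FREE for x in range(3) for y in range(3)}
world[(1, 1)] = OBSTACLE
grid, path = explore(world, (0, 0))
```

Every visit to a frontier reveals at least one unknown cell, so the loop is guaranteed to terminate with the whole box mapped; a real system would of course plan collision-free paths between frontiers rather than teleporting.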
SB: How does the robot deal with things like currents and disturbances in the water?
BE: That’s something that the current robot is not actively combatting or modeling as it inspects, but we are currently working on solutions to those problems that we hope will be implemented within a year or two.
Some of them involve machine learning: the robot may have a prior model of what the currents look like — the expected conditions in the water where it will be operating — and we are trying to develop algorithms that allow it to learn on the fly and build a higher-resolution model to plan around the actual conditions.
But ultimately there is a threshold where sometimes the currents are so high that the robot cannot operate. That’s an issue we have run into in the Hudson River: sometimes the currents there are prohibitively high, to an extent that our robot just can’t deal with. The VideoRay we’re using as a prototype probably could not fight a current greater than one knot, or maybe two knots at most, and the currents in the Hudson River can be four knots or higher depending on the time of day.
But when the currents are survivable, we would like our robot to leverage existing models, build on top of them, and refine them, using learning so that as it operates it gradually builds a better and better model. That’s something we’re looking at right now, mostly in simulation.
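One simple way to picture that kind of on-the-fly refinement (a hypothetical sketch, not the lab’s algorithm; the grid cells, units, and blending rule are all assumptions) is a coarse grid of current estimates that starts from the prior model and is averaged with in-situ measurements as the robot passes through each cell:

```python
class CurrentFieldModel:
    def __init__(self, prior):
        # prior: dict mapping cell -> (east, north) current estimate in knots,
        # e.g. taken from a tidal forecast model.
        self.estimate = dict(prior)
        self.counts = {cell: 0 for cell in prior}

    def update(self, cell, measured):
        """Fold one measured (east, north) current sample into a cell's estimate."""
        n = self.counts[cell] + 1
        self.counts[cell] = n
        w = 1.0 / (n + 1)          # the prior counts as one pseudo-observation
        e0, n0 = self.estimate[cell]
        me, mn = measured
        self.estimate[cell] = (e0 + w * (me - e0), n0 + w * (mn - n0))

    def operable(self, cell, max_knots=1.5):
        """Rough go/no-go check against the vehicle's thrust limit."""
        e, n = self.estimate[cell]
        return (e * e + n * n) ** 0.5 <= max_knots

model = CurrentFieldModel({(0, 0): (0.5, 0.0)})
model.update((0, 0), (1.5, 0.0))   # prior said 0.5 kn; one sample measured 1.5 kn
```

With this weighting the prior acts as a single pseudo-observation, so each cell’s estimate converges to the running mean of the measurements as more samples arrive.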
SB: Is that somewhere where you’re trying to play catch-up in a sense? Can humans or animals already deal with currents better?
BE: Yes, that’s a good question. You could say that we’re trying to equip this robot with the same capabilities an animal would have, to be able to sense and react to disturbances like that. Animals have some simple but very capable ways of sensing and adapting to those kinds of things, and we want our robot to have that same capability.
SB: What do you see as some of the more promising intersections between technology and environmental science or climate change science?
BE: We are interested not just in building maps of what we see underwater — certainly that may be valuable on its own, to see if the structures or the topology of the seafloor are undergoing changes — but there are other types of environmental sensing that we hope a robot like this would be able to do, in the same way it might inspect and build a 3D map of a pier. We hope it could sample various quantities that might be of interest, things like the salinity of the water, the temperature, or the content of certain types of chemicals.
One area that is often of interest is trying to track and model the movement of oil plumes of various kinds. If we equip this robot with what it needs to go in without a prior model, explore an environment, and build the most complete map possible, our hope is that that capability can generalize not just to the structure mapping we’re doing now, but to constructing a temperature field, a salinity field, or something of that nature.
One of the trickiest parts is that those fields are changing in time as well. A good example is that as the tides change in the New York Harbor, the salinity of the water is constantly changing too. As the tide comes in, it forces the flow of water upstream and the water becomes increasingly salty; then as the tide goes out, more freshwater comes into the Harbor from the opposite direction and the water becomes predominantly fresh. In that way there’s a whole salinity gradient varying through the water and changing through time. Similarly, temperature changes as all those phenomena take place.
These fields are dynamic, and we have pretty good models of them, but our hope is that when our robot goes out and interrogates a very specific region of interest, it can fill in the gaps in those models.
SB: Do you have any role in developing the hardware in this project, or do you focus only on software?
BE: We mainly focus on software and use commercially available hardware. However, I would say that we use combinations of hardware that are a little unconventional, because we would like our robot to have better situational awareness and to be more self-sufficient than a robot piloted by a human.
So our robot has a pretty unique suite of sensors that a typical VideoRay operator wouldn’t use, but our main contribution has been developing a custom software kit for this platform that allows it to use its sonar to localize itself, build a 3D map, and then make decisions and explore an unknown environment.
We intend to make this software freely available — some of the individual libraries that comprise it have already been released — and our hope is that others will adopt it, so we can converge toward more commonly used robot platforms in this domain, as has happened with aerial drones and other domains where the community of users all adopt the same platform, develop software for it, and make progress faster.
Something like that has started to happen with some hobby ROVs that have come out recently: there’s one called the OpenROV and one called the BlueROV. But none of them have acoustic sensors; they just have cameras. And in the type of water we have around here — very turbid, murky water — it’s hard to see more than a few inches with a camera. So we are trying to put forth a suggestion for the kind of sensors and the kind of robot you would need to achieve better situational awareness than you could get with a camera.
SB: Are there any barriers that you could think of that stand in the way of a more widespread or successful implementation of the software?
BE: One of them, I think, is just the cost of these types of sensors. The cost of sonar, for example, is prohibitively high for a hobbyist to get involved. Today, a hobbyist could buy a DJI Phantom aerial drone for a few hundred bucks and do a lot of interesting testing and development with it, but if you really wanted to do large-scale mapping with your own underwater robot, you wouldn’t be able to obtain a sonar for less than five figures, so at the moment it’s a costly enterprise to get involved in.
I think that’s one barrier to entry — it’s a reason why hobbyists and makers and people who are interested in playing around with these platforms don’t have the same access that they do in other areas. But in the future, as the community of users and customers grows larger and the sensors become more widely adopted, that may improve.
SB: To wrap up, what do you think your personal future in the field is going to look like? Do you think that you’ll stay in academia, or that you might move into industry?
BE: I’m pretty happy in academia for now, and I think there are still very challenging problems in underwater autonomy where universities are in a strong position to make contributions. That’s changing in other sectors, like unmanned aerial robotics and self-driving cars, where you’re starting to see industry take over.
I think at the moment the economic proposition is not favorable enough for industry to get as involved in this space as they have been in others, and there are still a lot of really interesting things we, as universities, can do to contribute to the toughest problems in this space, so it certainly interests me for now.
That being said, we are always seeking collaborators from within industry — talking to other people developing these robots, whether for scientific operations, for the oil and gas industries, or for other industries. There are certainly many groups in industry thinking about how to bring more autonomy into this domain; it’s just that there are a lot of tough problems and not yet a single solution that everybody has converged on.
SB: Thank you so much for talking!
BE: Great, thank you.