Chapter 8
Concluding Remarks
I see insect level behavior as a noble goal for artificial intelligence
practitioners. I believe it is closer to the ultimate right track than
are the higher level goals now being pursued.
R. A. Brooks, 1986
The science fiction of an artificial flying insect buzzing around your of-
fice, suddenly deciding to escape through the door and managing to reach
your colleague’s room is a futuristic scenario that this book humbly con-
tributed to bringing closer to reality. The 10-gram microflyer demonstrated
autonomous operation using only visual, gyroscopic and anemometric sen-
sors. The signal processing and control were carried out entirely on-board,
despite the plane’s very limited payload of approximately 4 g. This was
made possible by developing ultra-light optic-flow detectors and fitting the
algorithms (optic-flow detection and airplane control) in a tiny 8-bit mi-
crocontroller. The whole software running on the airplane used only a few
thousand bytes of program memory. In flight, the airplane consumed less
than 2 W, which is 30 times less than a desk light. Along a slightly differ-
ent line, the blimp permitted the use of an evolutionary technique to auto-
matically develop embedded neuromorphic controllers. This buoyant robot
required only 1 W to autonomously circumnavigate the test arena while
avoiding collisions with walls.
One of the main outcomes of this experimental exploration is the in-
sight gained about the linking of simple visual features (such as local optic-
flow or contrast rate) to actuator commands, in order to obtain efficient be-
haviour with lightweight and dynamic robots featuring limited computa-
tional resources. Typical problems arising when optic flow is used in the
absence of contact with an inertial frame (no odometry, unstable motion) have
been solved by merging gyroscopic information and visual input. The re-
sults of the evolutionary experiments showed that optic flow is not the only
way of processing monocular visual information for course control and col-
lision avoidance. Although the evolved contrast-based solution cannot be
generalised to other environments as easily as an optic-flow-based strategy,
it represents an interesting alternative, requiring even less computational
power.
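As a concrete illustration of the kind of sensor fusion mentioned above, the following C sketch subtracts the rotational flow predicted from a rate gyro from a raw optic-flow measurement, leaving the translational component that carries distance information. The function name, units and calibration gain are illustrative assumptions, not code from the robots described in this book.

/* Optic-flow derotation: for a pure rotation at rate omega (rad/s),
 * every image point moves at the same angular velocity regardless of
 * its distance, so the rotational component can be predicted from the
 * gyro alone and subtracted from the measurement. */
float derotate_optic_flow(float of_measured,  /* raw detector output (rad/s)    */
                          float gyro_rate,    /* angular rate from gyro (rad/s)  */
                          float calib_gain)   /* matches detector units to rad/s */
{
    /* What remains is the translational optic flow, which is inversely
     * proportional to the distance of the viewed obstacles. */
    return of_measured - calib_gain * gyro_rate;
}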
Although the primary purpose of this research was to synthesize light-
weight autonomous flyers, the size, energy, and computational constraints
of the robotic platforms encouraged us to look at mechanisms and prin-
ciples of flight control exploited by insects. Our approach to developing
autonomous vision-based flying robots was inspired by biology at different
levels: low-resolution insect-like vision, information processing, behaviour,
neuromorphic controllers, and artificial evolution. In doing so, we have, in
some sense, contributed to the testing of various biological models, in par-
ticular with the demonstration that an artificial flying insect could steer au-
tonomously in a confined environment over a relatively long period of time.
In this regard, this book is an illustration of the synergistic relationship that
can exist between robotics and biology.
8.1 What’s next?
The natural next step with indoor microflyers consists in getting out of the
empty test arena and tackling unconstrained indoor environments such as
offices and corridors with sparse furniture. In order to reach this objective,
more attention will be required regarding the visual sensors. Two main
challenges have to be addressed. First, the light intensity variations will be
huge among different regions of such environments, some receiving direct
sun light and others only artificially lit. Second, it may happen that parts
of the visual surroundings have absolutely no contrast (e.g. white walls or
windows). If the vision system samples the field of view only in very few
regions (as is currently the case for the robots presented in this book), it
may provide no usable signals.
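One simple way of detecting such unusable regions would be to gate each optic-flow estimate by the local image contrast, as in the following C sketch; the region size, the threshold and all names are illustrative assumptions rather than details of the systems described here.

#include <stdlib.h>

#define N_PIXELS     30   /* pixels in one optic-flow region            */
#define CONTRAST_MIN  8   /* minimum mean |gradient|, 8-bit intensities */

/* Returns 1 if the region contains enough texture for a usable
 * optic-flow estimate, 0 if it should be ignored (e.g. a white wall). */
int region_has_contrast(const unsigned char px[N_PIXELS])
{
    long sum = 0;
    int  i;
    for (i = 1; i < N_PIXELS; i++)
        sum += abs((int)px[i] - (int)px[i - 1]);
    return (sum / (N_PIXELS - 1)) >= CONTRAST_MIN;
}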
Coping with these issues while keeping the overall weight and power
consumption as low as possible is a challenge that may be tackled
using custom-designed AVLSI(1) vision chips [Liu et al., 2003] instead of
classical CMOS cameras. This technology provides a circuit-design
approach to implementing certain natural computations more efficiently
than standard logic circuits. The resulting chips usually consume at least 10
times less power than an equivalent implementation with CMOS imagers
and digital processors. More specifically appealing for tackling the problem
of background light fluctuations is the existence of adaptive photorecep-
tor circuits [Delbrück and Mead, 1995] that automatically adapt to back-
ground light over a very broad intensity range (more than 6 decades). These
photoreceptors can be used as a front end to optic-flow detector circuits fitted
on the same chip (e.g. Kramer et al., 1995; Moeckel and Liu, 2007). This
technology also provides the potential for widening the field of view by
arranging pixels and optic-flow circuits as desired on a single chip, while con-
suming less energy and computational power than if the same functionality
had to be achieved with standard CMOS sensors and vision processing in a
microcontroller. As an example, Harrison [2003] presented an AVLSI chip
for imminent collision detection based on the STIM model.
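The behaviour of such an adaptive photoreceptor can be summarised by a simple discrete-time model: high gain for transient changes in log-intensity, with a slow feedback loop that tracks the background level. The following C sketch is a behavioural caricature under assumed gains, not a description of the actual circuits.

#include <math.h>

typedef struct {
    float baseline;   /* slowly adapting estimate of log-intensity */
} adaptive_pr_t;

/* One sample step of a behavioural adaptive-photoreceptor model.
 * 'intensity' must be positive and may span many decades; the
 * logarithm compresses this range while the adapting baseline
 * keeps the output centred around zero. */
float adaptive_pr_step(adaptive_pr_t *pr, float intensity,
                       float transient_gain,  /* e.g. 40            */
                       float adapt_rate)      /* e.g. 0.01 per step */
{
    float log_i = logf(intensity);
    float out   = transient_gain * (log_i - pr->baseline);
    pr->baseline += adapt_rate * (log_i - pr->baseline);
    return out;
}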
All in all, increasing the field of view and the number of optic flow de-
tectors is definitely a research direction worth pursuing. However, even
if custom-designed vision chips allow pixels to be arranged freely,
the issue of finding lightweight optics providing a wide FOV remains.
One solution may be to copy insects and develop artificial compound eyes.
For instance, Duparré et al. [2004] have already developed micro-lens arrays
that mimic insect vision. However, widening the FOV while working with
flat image sensors is still an open issue with this technology.
Another approach would be to reduce the weight constraints by using
outdoor flying platforms. In that case, standard vision systems with fish-eye
lenses could be used to provide a 180° FOV in the flight direction. Image
processing could be carried out on a far more powerful embedded processor
(1) Analog Very Large Scale Integration.
than the one used indoors. However, the payload for the required electronics
would then increase to 50 g or so. Outdoor spaces, even urban canyons,
allow for faster flight and thus increased payload, while remaining on the
safe side with respect to people or buildings.
8.2 Potential Applications
Autonomous operation of ultra-light flying robots in confined environ-
ments without GPS or active range finders is not trivial. This book ex-
plored a novel approach that yielded very dynamic behaviour, quite far from
stabilised level flight between predefined way-points in open space, as is
often the case with current UAVs. The proposed biologically inspired so-
lutions (from sensor suite to control strategies) are so lightweight and com-
putationally inexpensive that they very much fit the growing demand for
automating small UAVs that can fly at low altitude in urban or natural en-
vironments, where buildings, trees, hills, etc. may be present and a fast col-
lision avoidance system is required [Mueller, 2001].
The described approach could also be of great help in automating even
smaller flying devices, such as those presented in Section 2.1, which feature
the same kind of properties as our flying robots: complex dynamics due to
the absence of contact with an inertial frame, limited payload and restricted
computational power. Distance sensors are not a viable solution in such
cases, and visual and gyroscopic sensors are probably the best alternative
to provide such robots with basic navigational skills. This approach seems
to be acknowledged by the micromechanical flying insect (MFI) team in
Berkeley, which is working on a sensor suite for their 25-mm flapping
robot. Although collision avoidance has not yet been tackled,
preliminary work toward attitude control relies on visual (ocelli-like) and
gyroscopic (halteres-like) sensors [Wu et al., 2003; Schenato et al., 2004].
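The underlying principle of such an ocelli-like sensor is simple enough to sketch: brightness differences between opposing sky-facing photodiodes approximate the tilt of the body away from the brightest sky direction. The following C fragment is a hypothetical illustration of that principle, not the MFI implementation.

typedef struct { float pitch_err, roll_err; } attitude_cue_t;

/* Four upward-looking photodiodes (front, back, left, right):
 * normalised intensity differences give approximate pitch and
 * roll error signals relative to the sky's brightest direction. */
attitude_cue_t ocelli_cue(float front, float back, float left, float right)
{
    attitude_cue_t cue;
    float total = front + back + left + right + 1e-6f; /* avoid /0 */
    cue.pitch_err = (front - back) / total;
    cue.roll_err  = (left - right) / total;
    return cue;
}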
More generally, the approach proposed in this book can provide low-
level navigation strategies for all kinds of mobile robots characterised by
small size and non-trivial dynamics. It can equally be useful in a number
of situations where the environment is unknown (no precise maps are avail-
able) and the use of GPS is not possible. This is the case, for example, in
indoor environments, underwater, or in planetary exploration, especially if
the robot has to move close to the relief or in cluttered environments that
are difficult to reconstruct with range finders.
Beyond the application of vision-based navigation strategies to mobile
robots, the use of small indoor flying systems as tools for biological research
can be envisaged. It is indeed striking that a growing number of biologists
are assessing their models using physical robots. Until
now, the robotic platforms used have been terrestrial vehicles (e.g. Srinivasan
et al., 1998) or tethered systems (e.g. Reiser and Dickinson, 2003; Ruffier,
2004). An indoor flying platform with visual sensors and the ability to fly
at velocities close to those reached by flies (1-3 m/s) potentially provides
a more realistic testbed, one that can be used to assess models of visually
guided behaviours in free flight. The fact that the aerial robots presented in
this book fly indoors and are small and reasonably resistant to crashes would
further ease the testing phase and alleviate the need for large technical
teams.