
Distinguishing Aspects of Geometric Wavar
(under construction)
PLEASE RETURN AFTER I HAVE MADE THIS INTELLIGIBLE

These geometric methods of sonar begin by organizing an array into triangulating subsets, each containing an appropriate number of sensors. Each subset both detects and checks possible sources of sound.

There are several alternative sub-methods for computing representations of sources of sound. One sub-variety, call it Surfaces, detects prospective hitpoints via geometric surfaces and checks those points via a separate operation. Another sub-variety, call it Grids, searches a pre-computed grid and, in effect, combines detecting and checking into the search.

A Grids method was generating computer-animations in 2005. A Grid is predefined in front of the Device. At each point P of the Grid, and for each sensor M of the Device, the sum of the time of travel from the Clicker to P plus the time of travel from P to M is precomputed and stored. A Grids method was introduced in the U.S. Patent "Echo scope."
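As a rough sketch only (not the patented method itself), the precomputation described above might look like this, assuming a nominal speed of sound in seawater and hypothetical positions for the Clicker, the sensors, and the Grid points:

```python
import math

SPEED_OF_SOUND = 1500.0  # m/s, approximate for seawater (an assumption)

def travel_time(a, b):
    """Time for sound to travel in a straight line from point a to point b."""
    return math.dist(a, b) / SPEED_OF_SOUND

def precompute_grid(clicker, sensors, grid_points):
    """For each Grid point P and each sensor M, store the total time:
    Clicker -> P plus P -> M, as described in the text."""
    table = {}
    for i, p in enumerate(grid_points):
        outgoing = travel_time(clicker, p)
        table[i] = [outgoing + travel_time(p, m) for m in sensors]
    return table

# Hypothetical example: a Clicker at the origin, two sensors beside it,
# and a tiny two-point Grid in front of the Device.
clicker = (0.0, 0.0, 0.0)
sensors = [(0.1, 0.0, 0.0), (-0.1, 0.0, 0.0)]
grid = [(0.0, 0.0, 10.0), (0.0, 0.0, 20.0)]
table = precompute_grid(clicker, sensors, grid)
```

At run time, a search over such a table can match observed times-of-arrival against the stored sums, which is how detecting and checking are combined into one lookup.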

In the methods of Surfaces, representations of sources of sound are computed by means of the geometry of conic surfaces that are themselves each determined by toas at known locations and, in the case of active sonar-imaging, by the position and time of emission of a click. Surfaces methods have recently (2013) been used in simulations of Feature-Based Passive sonar.

.
.
.

"Toa" stands for Time-Of-Arrival of an instance of a Feature of a wave.
When the time t0 and place C of generation of a wave (let's say, of a "click") are known, then toas can be used with t0 to compute times-of-travel and, with C and the known locations of sensors, to compute ellipsoids consisting of possible source-points of reflections. When t0 and C are not known, the "ditoas" (Differences-In-Times-Of-Arrival) can be computed upon instead.
Thus, we have "Active" and "Passive" wavar-imaging.
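The Active case above can be sketched in a few lines. Assuming a nominal speed of sound, a reflection point P heard at time toa by sensor M must satisfy |C - P| + |P - M| = speed × (toa - t0), that is, P lies on an ellipsoid with foci C and M:

```python
import math

SPEED_OF_SOUND = 1500.0  # m/s, approximate for seawater (an assumption)

def on_ellipsoid(p, clicker, sensor, t0, toa, tol=1e-6):
    """Active case: p is a possible source-point of the reflection iff
    |C - p| + |p - M| equals SPEED_OF_SOUND * (toa - t0), i.e. p lies
    on an ellipsoid whose foci are the Clicker C and the sensor M."""
    path = math.dist(clicker, p) + math.dist(p, sensor)
    return abs(path - SPEED_OF_SOUND * (toa - t0)) < tol

# Hypothetical example: an echo from a point 10 m in front of the Clicker.
C = (0.0, 0.0, 0.0)
M = (0.2, 0.0, 0.0)
P = (0.0, 0.0, 10.0)
t0 = 0.0
toa = (math.dist(C, P) + math.dist(P, M)) / SPEED_OF_SOUND
ok = on_ellipsoid(P, C, M, t0, toa)
```

Each sensor contributes one such ellipsoid; intersecting the ellipsoids of several sensors is what narrows the possibilities down to points.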
In both, an echoscope is like this:

An echoscope has an array of sensors of waves. The waves have Features -- Hill and Valley being two possible examples.
For each sensor M, the output of M is digitized and sent to a computer. Each such output might, for each Feature F, contain instances of F, and these instances are called F-Things. So, there can be Hill-Things and Valley-Things, for example.
A software routine called FeatureDetector does this: For each sensor M, it detects, for every Feature F, F-Things in the output of M, and records, for each, its toa, the time-of-arrival of that F-Thing.
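A minimal sketch of such a FeatureDetector follows. The choice of peak-picking here is an assumption for illustration: it treats local maxima of the digitized output as Hill-Things and local minima as Valley-Things, recording a toa for each; the author's actual routine and Features are not specified in the text.

```python
def feature_detector(samples, sample_rate):
    """Sketch of a FeatureDetector for one sensor M: local maxima are
    taken as Hill-Things, local minima as Valley-Things, and for each
    one its toa (time-of-arrival, in seconds) is recorded."""
    hills, valleys = [], []
    for i in range(1, len(samples) - 1):
        t = i / sample_rate  # toa of the sample, in seconds
        if samples[i] > samples[i - 1] and samples[i] > samples[i + 1]:
            hills.append(t)
        elif samples[i] < samples[i - 1] and samples[i] < samples[i + 1]:
            valleys.append(t)
    return hills, valleys

# Hypothetical digitized output of one sensor, sampled at 1 kHz.
signal = [0.0, 1.0, 0.0, -1.0, 0.0, 2.0, 0.0]
hills, valleys = feature_detector(signal, sample_rate=1000.0)
```

The lists of toas, one pair of lists per sensor, are what the later routines compute upon.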
The sensors are members of sets called Scopions. In many simulations, the number of sensors per scopion has been six.
A software routine called HitDetection does this: For each Scopion S, and for every Feature F, it computes upon the toas of F-Things in the outputs of the sensors in S, to detect hitpoints, point-representations of the possible origins of those F-Things.
Typically, each Scopion S produces its scopic part, consisting of just a few hitpoints, and these might be spread about in space.
But, the hitpoints of some large number of scopions might combine into a reasonable image.
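One form the "checking" step might take (the author's actual HitDetection routine is not published, so this is an assumed form) is: a candidate hitpoint is accepted only if the toa it predicts at every sensor of the Scopion matches the measured toa. A sketch, for the Active case with a hypothetical six-sensor Scopion:

```python
import math

SPEED_OF_SOUND = 1500.0  # m/s, approximate for seawater (an assumption)

def check_hitpoint(p, clicker, scopion, t0, toas, tol=1e-6):
    """Accept candidate hitpoint p only if, for every sensor in the
    Scopion, the predicted time-of-arrival of an echo from p matches
    the measured toa within tolerance."""
    for m, toa in zip(scopion, toas):
        predicted = t0 + (math.dist(clicker, p) + math.dist(p, m)) / SPEED_OF_SOUND
        if abs(predicted - toa) > tol:
            return False
    return True

# Hypothetical Scopion: six sensors on a 0.1 m ring around the Clicker.
C = (0.0, 0.0, 0.0)
scopion = [(0.1 * math.cos(k * math.pi / 3), 0.1 * math.sin(k * math.pi / 3), 0.0)
           for k in range(6)]
P = (0.0, 0.0, 10.0)  # the true reflection point, for this example
t0 = 0.0
toas = [t0 + (math.dist(C, P) + math.dist(P, m)) / SPEED_OF_SOUND for m in scopion]
accepted = check_hitpoint(P, C, scopion, t0, toas)
```

A point that does not lie at the common intersection of the six ellipsoids fails the check, which is why each Scopion's scopic part contains only a few hitpoints.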

More of the components of the methods of computation of images from the echoes of clicks of dolphins are to appear here. But, some must wait on the progress of patent-applications.

It seems impossible for cochleae to carry the 3D information needed for sonic imaging as done by dolphins.
However, there might appear to exist evidence that the cochleae are used in "echolocation." Their accuracy in determining location might be as low as ours. But, the main function of cochleae in the imaging-sonar of dolphins is likely to be analogous to, if not so precise in location as, "color" in human vision.
Many experiments are yet to be run to test these ideas.
What is certain is that the simulations show that echoscopic methods will, almost surely, work in sonar-imaging devices.

doug@DolphinInspiredSonar.com