Amber Callan | Conceptual Geological Mapping Device

Date of Project: January 2013 – May 2013
Class: Non-Traditional Interfaces

Working with a partner, I helped design a conceptual device to help users in the oil industry acoustically map oil deposits beneath the surface, through an imaging analogy of acoustically mapping a dark room. Our device used a customized emitter, a multi-touch gesture interface and an interactive 3-D room map. Because this project was done three years ago, I’m in the process of revamping these designs with the Adobe CS skills I have now. Stay tuned for that!

This project was presented to two Shell Oil geologists using a Wizard-of-Oz (WOZ) interface design.

When geologists go out into the field looking for oil, gas and other natural resources, they need a way to look beyond the ground before them. Layers of rock, water and soil cover most of the resources they seek. First, geologists need to see whether these resources are present underground, and then they need to determine whether drilling is possible, profitable and safe. Once these facts are checked, they need to create a plan for drilling, because challenges arise with all rock types, including sticky clay, shale, hard carbonates and salt domes. Using sonograms to penetrate the visible ground layer yields a visual representation of these layers, which becomes the basis for finding faults and identifying stratigraphy before drilling can occur.

Most geophysical experiments involve sending sound waves directly into the earth, usually generated by some sort of explosion, and recording the echoes. These sound waves bounce off of different layers and boundaries until they return to the receivers on the ground, and an image is then constructed based on the data recorded. Geologists used to draw on this one-dimensional output (pictured below) to analyze the ground beneath them. Computer interfaces have since changed, but the pictures created by these sonograms have not changed drastically. In fact, geophysicists at Shell Oil have said that the methods have not changed in thirty years, nor has the usefulness of the three-dimensional image. Because of their poor resolution and quality, the three-dimensional images constructed today still leave much up to interpretation and discovery. It is therefore important to revisit this problem and the technology of sonograms to create a better system for collecting the information and analyzing the three-dimensional image produced.

 

Basic echogram of the earth (Shell Oil)

 

While the world of geophysics may seem extremely daunting, we can take this problem and apply it to a smaller, more understandable scale. For example, imagine you are standing on a well-lit stage with a pitch black room before you. The room is filled with doors, stairs, furniture and partial walls that you need to see around and behind. The wall behind the stage is covered in acoustic receivers and the goal is to create a sound emitter that you can knowingly manipulate to yield a model of all the objects in the room. Relating our example back to the real world, the wall of receivers along with the stage represent the ground we can see while the dark room before us represents the unknown ground below. The unique objects in the room symbolize different elements we are looking for in the ground like salt domes and oil. Using this metaphor, we can create an emitter for the room and then apply it to our real world problem.

 

Basics of the problem

The speed of sound changes based on the material it is emitted into or passes through

There are two basic elements to the problem. One is the problem of using sound to map an unknown area and the other is using this information to create a useful three-dimensional image for the geophysicists studying the earth.

Before solving problem one, we have to understand sound and its applications. Therefore, we look to the most basic applications of acoustic mapping: sonograms and ultrasounds. In both of these methods, an emitter sends out a pulse of sound which travels through the surrounding space. If the sound wave is reflected by an object, it becomes distorted and gets fuzzier. Then, when it travels back to the wall of receivers, these impurities identify the sound as an echo (the fuzzier the sound, the more times it has bounced). Next, because we know that the sound wave must have traveled twice the distance from the emitter to the object (x) in a specific time (t), we can use the simple equation x = s * t / 2 to figure out how far away an object is. Knowing the speed of sound also plays a very important role in identifying objects, mainly because the speed of sound differs depending on the material it travels through. In Table 1, several values are listed for the speed of sound to demonstrate that as sound travels through different mediums, its speed changes. Therefore, recording the speed of sound can tell us where edges are and where sound has passed through an object.
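To make the echo-ranging relation concrete, here is a minimal sketch of the x = s * t / 2 calculation. The medium names and speed values are illustrative placeholders, not project data.

```python
# Minimal sketch of the echo-ranging idea described above.
# Medium speeds are approximate textbook values used only for illustration.

SPEED_OF_SOUND_M_S = {
    "air": 343.0,
    "water": 1480.0,
    "sandstone": 2500.0,  # rough placeholder value
}

def distance_to_object(round_trip_time_s: float, medium: str = "air") -> float:
    """Distance from emitter to reflector: x = s * t / 2,
    since the echo travels out and back in time t."""
    s = SPEED_OF_SOUND_M_S[medium]
    return s * round_trip_time_s / 2.0

if __name__ == "__main__":
    # An echo heard 0.058 s after emission in air puts the object roughly 10 m away.
    print(f"{distance_to_object(0.058, 'air'):.1f} m")
```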

Knowing these two basic principles, the receivers against the back wall of the stage will record the sound response and identify several qualities of the sound, including its pitch, intensity and position. Using the speed of sound alone, the pitch will tell us how far away an object is from our system. The intensity will tell us the size and density of an object, and the delay of the sound received will tell us the position of the object in relation to our system.

While mapping an object from one perspective may seem easy, seeing behind an object proves to be very difficult, and this is where the true problem lies. In our real-world problem, geologists cannot clearly see the lower boundaries of salt bodies in the ground using a basic sonogram. This shadow underneath the salt body can hide oil and other key findings from being shown in the model and from being discovered before drilling. Drilling around salt bodies improperly can have very serious consequences such as damaging a drill bit, causing a hole to close, and getting tools stuck in the ground (Dussealt). In the image below, the models on the left-hand side represent a simple sonogram. The quality of these images is low, and you cannot really see what is underneath the salt domes: they form a sort of shadow, hiding whatever is under them. In our system, partial walls represent these salt bodies, covering what lies behind them.

 

On the left side, the speed of sound is recorded to map the ground below while on the right side, the same field is mapped with speed and velocity of sound.

 

To get around these salt bodies, we need to be able to record sound through them and see how its speed changes. Therefore, we need to record the velocity of sound in addition to its speed. While speed is a scalar quantity that can tell us how fast sound is moving, the velocity of sound is a vector quantity that tells us the rate at which sound changes its position (Physics Classroom). Therefore, by also recording the velocity of sound, we can essentially see through and behind objects. On the right side of the image above, a more comprehensive image is formed using velocity in addition to the speed of sound. It shows the lower boundaries of these salt bodies, and it will show the backs of objects in our room.

Next, creating the actual image tends to be a problem for several reasons. First of all, the process of creating this image is extremely serial. A detailed three-dimensional image requires a full range of sonograms before it can be completed. Therefore, the emitter and receiver set need time to send out and receive sounds for the entire room before a user can view said image. This greatly limits the possibilities for an interface because it will not be able to provide complete simultaneous feedback for the entire room at once. The emitter and receiver set will unfortunately take time to compile the data. Next, creating this complete three-dimensional image requires many data points to draw information from. This means that sonograms must be taken from a variety of heights and angles to create an accurate, detailed representation of all the objects in the room. The emitter must also be able to focus the scope of sound it releases while changing its range as well. Lastly, the geologist has to be able to locate and view elements of the image that they deem important like soil type, rock and water conditions (IHRDC).

To combat the serial nature of the sonogram collection, users will be able to watch the three-dimensional image form. This gives them some idea of how the system is working, showing what the emitter has mapped and what it has left to map. As an added time saver, a preliminary scan occurs first so that a rough but comprehensive image can be formed. This image will not have full detail or range, but it gives the user an idea of what is there and an opportunity to zoom in on specific features they have already identified.

Next, in order to collect many data points, we have to build an emitter that can handle a variety of angles, lengths and heights. This requires many specifications on where the emitter aims and how it releases sound, described in more detail below. Finally, we need to give the geologists a way to distinguish between object types so they can look at what they want in the final three-dimensional image. Incorporating a system of densities and identifications for different layers therefore makes the system more useful and the image more informative. All in all, finding solutions to each of these problems and keeping them in mind would ultimately yield a better three-dimensional model than programs were previously able to create.

The Emitter: Specifications and Input

In order to tackle the issues mentioned above, we first needed to create a new emitter to send out the proper sound for the new set of data we want collected. First of all, the sound needs to be of an extremely high frequency. This will ensure that it penetrates the variety of objects in the room, that it will not be annoying to constantly hear and that it will provide greater imaging resolution (General Electric Company). Then, in order to create a full set of data points, we will need the emitter to direct sound in multiple directions and at multiple lengths. Therefore, we have incorporated this in two parts.

 

Modeling with spirals vs. X modeling and sweeping

 

First off, the emitter will travel horizontally and vertically while spiraling. This not only covers the y-axis and x-axis but covers a bit of the z-axis as well, thanks to the angles. As shown in the image above, spirals will be able to cover more points in the room and build a comprehensive three-dimensional model. For free rotation, the emitter will sit on a ball-and-socket joint. For movement along the y-axis, the emitter will be attached to a lift (such as a scissor lift) in order to reach different heights in the room. Lastly, for movement along the x-axis, the emitter will be part of a motorized system on wheels. Because the emitter will be on a stage, we realize that it could potentially run into walls or fall off of the stage. Therefore, the six sides of the cube-shaped emitter will be equipped with sensors to detect the distance between the device and the boundaries. On the four sides and top, there would be a basic laser sensor to detect distance. The sensor on the bottom, however, will be more extensive, looking at the area surrounding the base of the emitter device (pictured below). It will look for a change in height to prevent the emitter from falling over and getting damaged.
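As a rough illustration of how this spiraling sweep could be scheduled across lift heights and wheel positions, here is a small sketch; the step sizes, ranges and function names are assumptions made for illustration, not part of the original design.

```python
# Hypothetical sketch of scheduling the emitter's spiral sweep.
# All numeric ranges and step counts are illustrative assumptions.

def spiral_aim_angles(turns: int = 5, points_per_turn: int = 36):
    """Yield (azimuth, elevation) pairs that trace a spiral from straight
    ahead toward the ceiling, so the sweep covers a bit of every axis."""
    total = turns * points_per_turn
    for i in range(total):
        azimuth = (i % points_per_turn) * (360.0 / points_per_turn)  # full circle per turn
        elevation = (i / total) * 80.0                               # gradually tilt upward
        yield azimuth, elevation

def scan_schedule(lift_heights_m=(0.5, 1.5, 2.5), wheel_positions_m=(0.0, 2.0, 4.0)):
    """Combine lift heights (y-axis) and wheel positions (x-axis) with the
    spiral aim pattern so the room is sampled from many vantage points."""
    for x in wheel_positions_m:
        for h in lift_heights_m:
            for az, el in spiral_aim_angles():
                yield {"x_m": x, "height_m": h, "azimuth_deg": az, "elevation_deg": el}
```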

 

A rudimentary model of the foldable, mobile emitter complete with sensors and the speaker

 

Next, we ensured that the emitter could record data from a variety of lengths. Changing the length of the sound emitted gives us a better idea of the position of objects within the room: if an object is not hit with sound until the ten-meter mark, then we know how far away it is from our system with greater accuracy. Later on, we allow the user to specify a spot in the room to scan with our emitter, and being able to emit sounds around that distance will come in handy for this purpose.

Then, once we send out the sound, we have to receive it and process it. Thanks to the wall of acoustic receivers provided, we just need to specify the qualities of sound we are looking for. First of all, we will use a hierarchy of sounds, meaning that a clear sound with no defects indicates a primary bounce, while a fuzzier sound with defects indicates a secondary or tertiary bounce. Therefore, the first peak records the energy of the initial emission, and the next peak without defects shows where the back wall is located. Then, as mentioned above, we will use pitch, intensity and position to determine distance, density and location.
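A minimal sketch of this hierarchy-of-sounds idea follows; the distortion thresholds, field names and the use of echo delay for ranging are assumptions made for illustration only.

```python
# Sketch of classifying echoes by how "fuzzy" they are and translating the
# recorded qualities into object properties. Threshold values are invented.

def classify_bounce(distortion: float) -> str:
    """Label an echo by its distortion (0.0 = perfectly clean)."""
    if distortion < 0.1:
        return "primary bounce"        # clean echo straight off the first surface
    elif distortion < 0.4:
        return "secondary bounce"
    return "tertiary or later bounce"

def describe_echo(pitch_hz, intensity_db, delay_s, distortion, speed_m_s=343.0):
    """Turn the recorded qualities into the properties the text describes;
    here the round-trip delay is used for range via x = s * t / 2."""
    return {
        "bounce": classify_bounce(distortion),
        "distance_m": speed_m_s * delay_s / 2.0,
        "intensity_db": intensity_db,  # proxy for object size and density
        "pitch_hz": pitch_hz,
    }
```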

Lastly, the receivers will also pick up the velocity of sound, so our system can use the velocity model described above as a way to “see around” objects. Once the receivers pick up the speed and velocity of sound from several points around the room, a computer program can combine this data with several mathematical formulas to create a three-dimensional model.

User Interaction

Once the three-dimensional images are constructed, we needed to decide how a user was going to look at and interact with them. During our creative process, we considered several possibilities but ultimately decided between a locomotion interface and a gesture interface. The most obvious difference between the two is how the user sees the room. On one hand, the locomotion interface would give the user the ability to see objects at real size and up close; the user could walk through and see how objects were laid out in real space. On the other hand, a gesture interface incorporating a screen lets users zoom, manipulate and examine the space. In making our decision, we first considered navigation through the room. At first, a locomotion interface seemed the most intuitive because you could walk through the space that the model created. However, knowing that we could not yield an instantaneous three-dimensional image, we thought having people walk through a room while it was being constructed would introduce locomotion sickness, since the user would have to adjust to the lag in the virtual world. Using a gesture interface, a person could swipe through the room quickly, seeing it from every angle, and could watch as the room took shape. We also thought about the size of the visual. Both systems incorporate a visual component, whether on-screen or around the user. While a locomotion interface gives a larger, more detailed area, it becomes more difficult to navigate quickly or to see the larger picture. A gesture interface may be smaller and less detailed, but it provides an easy way to see the big picture and then gives the user the chance to zoom in and look at details.

After looking at the pros and cons above, we decided that the benefits of a gesture interface outweighed those of a locomotion interface. From there, we decided that in-air commands would be too awkward and inaccurate for this system. Therefore, we created an interface with six basic gesture commands on a touchscreen that can begin scans and display the three-dimensional images. We also decided a touchscreen would be beneficial because of the variety of systems it supports: it could be used on anything from smartphones and iPhones to tablets and computers. However, we did design for something with a smaller screen. Considering the size and bulk of the equipment used to create sonograms of the darkened room, a controller that was small, lightweight, and powerful made sense.

 

The simple gesture set for our interface

 

For gesture commands, we give the user the ability to perform the commands listed below and illustrated above (a sketch of how these gestures might map to actions follows the list):

  1. Rotate the 3D image (two fingers, clockwise or counterclockwise)
  2. Zoom in on the 3D image (a pinch motion)
  3. Pan within the 3D image (a swipe motion)
  4. Tilt the 3D image (two fingers in an up or down swipe motion)
  5. Scan for more detail (a press-and-hold motion)
  6. Select (a simple tap motion)

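Here is a hedged sketch of how these six gestures could be dispatched to model actions; the function and method names are placeholders, since the project was presented as a Wizard-of-Oz mock-up with no real implementation.

```python
# Hypothetical mapping from the six touch gestures to 3D-model actions.
# The model object and its method names are assumptions for illustration.

GESTURE_ACTIONS = {
    "two_finger_rotate": "rotate_model",      # clockwise / counterclockwise
    "pinch":             "zoom_model",
    "swipe":             "pan_model",
    "two_finger_swipe":  "tilt_model",
    "press_and_hold":    "request_detail_scan",
    "tap":               "select",
}

def handle_gesture(gesture: str, model) -> None:
    """Dispatch a recognized gesture to the corresponding model method."""
    action = GESTURE_ACTIONS.get(gesture)
    if action is None:
        return  # unrecognized input is ignored
    getattr(model, action)()
```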
With this set of commands, a user can get feedback that would normally be impossible in real life. A user would be able to navigate through the room as they would in a locomotion interface, with the added bonus of being able to zoom in and out and change angles quickly. The freedom to move around in the 3D model allows the user to choose specific areas to view and judge whether or not an area should be sonogrammed further for more detail in the model. Learning the gestures to operate the interface would take a short amount of time, as similar commands exist in other widely used navigational software such as Bing and Google Maps.

System flow and features

A basic walk through of the system with a dot indicating the status of the emitter, a history of previous scans and the gradient symbolizing the density of objects.

 

First of all, the user would open an application on their smartphone, tablet or computer. This would load a start-up screen with options that include “Start Scan” and “View Past Scans” (panel 1, pictured above). From there, you either begin a new scan of the room or you can look at scans you have already taken.

If you decide to start a new one, you simply press the button, and you’ll see the dot signalling emitter activity turn green as the emitter begins a quick scan of the room (panel 2, pictured above). The user receives feedback from this dot in the upper corner to tell them the status of the emitter: red means the emitter is not in use, yellow means the emitter is waiting or on standby, and green means the emitter is busy working. The system then begins to compile data from the sonograms and starts to build the three-dimensional model of the room. The user can see the model being built, so they have an idea of where the emitter is looking and how long it will take to complete.

Once a basic scan is compiled, the dot turns yellow and the system waits for further instruction. From here, you can manipulate the image by tilting it, rotating it or zooming. You can also identify a point of interest and ask for more information on it by double tapping and holding your finger down. This brings up a target symbol that you can place wherever you want more detail (panel 4, pictured above). The emitter light then turns green as the system records sonograms to map the indicated area. Lastly, you can save the scan to your history and look at other old scans (panel 6, pictured above).
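Putting the panel flow together, here is a hedged sketch of the scan-session states implied by the red/yellow/green dot; the class, method and state names are assumptions made for illustration.

```python
# Sketch of the emitter-status flow the panels describe. State names mirror
# the red / yellow / green dot; the class and method names are hypothetical.

from enum import Enum

class EmitterStatus(Enum):
    IDLE = "red"        # emitter not in use
    STANDBY = "yellow"  # waiting for further instruction
    BUSY = "green"      # actively scanning

class ScanSession:
    def __init__(self):
        self.status = EmitterStatus.IDLE
        self.history = []

    def start_scan(self):
        """'Start Scan': quick preliminary sweep, then wait for instructions."""
        self.status = EmitterStatus.BUSY
        # ... emitter performs the coarse sweep while the model builds up ...
        self.status = EmitterStatus.STANDBY

    def detail_scan(self, target_point):
        """Press-and-hold target: re-scan just the indicated region."""
        self.status = EmitterStatus.BUSY
        # ... emitter maps the area around target_point in higher detail ...
        self.status = EmitterStatus.STANDBY

    def save(self, model):
        """Save the finished scan to the history list."""
        self.history.append(model)
        self.status = EmitterStatus.IDLE
```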

 

Conclusion

Often, we think that the only way to know what is in front of our faces is to use our eyes, but our other senses can sometimes be enough to form an accurate picture. Geologists figured this out decades ago and have since used sound to “see” what is embedded in Earth’s crust. As time passes, technology and science advance, and with them so do the methods that geologists and geophysicists use to map the ground and its resources. These methods provide clearer, more useful images and models built by using sound waves to map the Earth. It is logical that the production of better models should be followed by the production of better interfaces.

The use of computer GUIs is dependable, but there are better options. While gesture interfaces might seem very basic, they provide several advantages. Most noticeable is the abundance of technology available that supports gesture interfacing. Also, while a computer GUI is limited to desktop or laptop use paired with a mouse and keyboard, gesture multi-touch devices such as smartphones, iPhones, tablets, iPads, and the Microsoft Surface are portable, require only the use of two fingers, and keep the data accessible since it is most commonly saved through a cloud service.

For our goal of mapping a dark room with complete object models, a gesture interface device provided the control that we needed while also allowing the user to view the progress being made. The added ability to allow for specified location mapping, after assessing which parts of the 3D model required more detail, was a way to make the sonograms more efficient: instead of scanning the entire room in full detail, we made sure we would only do so when necessary. Seeing as our dark room was analogous to the Earth’s crust, using our time and resources efficiently might not mean much for us, but it could save several thousand dollars’ worth of work for geophysicists.

As technology keeps advancing, the need to reexamine the interface for the Dark Room Problem will surely come up again. After going through the steps of designing the interface for this issue, a clear next step would be to use holographic interfaces, as they provide the same type of control as a gesture interface but with added dimensions. For now, we’ll have to make do with what we can use and be glad there’s an app for that.