We are seeing innovative, next-generation development in the sensors used for the Internet of Things (IoT). Historically, sensors measured just one parameter, perhaps temperature, humidity, or a fluid level, like accumulated rainfall. Next, several parameters were collocated onto a single sensor chip, so you could buy all three of these measurements in just one sensor. This reduced costs and enhanced functionality. Now we are seeing something much more advanced: the combination of chips with edge computing to aggregate data reads and to derive new data from these blended measurements.
These groundbreaking combinations of multiple sensor data with near real-time computing, storage, and analytics at the edge are being used to replicate the human senses. We now have:
- Digital Sight
- Digital Hearing
- Digital Taste
- Digital Smell
- Digital Touch
A broadly acceptable definition of a sense would be “A system that consists of a group of sensory cell types that responds to a specific physical phenomenon, and that corresponds to a particular group of regions within the brain where the signals are received and interpreted.” There is no firm agreement as to the number of senses because of differing definitions of what constitutes a sense. So, for the sake of this IoT discussion, and at this stage of technological development, let us just stay with the five core senses.
Since the advent of the industrial age, mankind has created technology that replicates the physical aspects of life. We create systems that attempt to replicate what we already know from the physicality and perceptions of life.
Engineers are creating these five human senses in technical sensor formats to aid in some process or another, such as autonomous vehicles. Let us consider the art of the possible for these new IoT sensor arrays. I call them arrays because, at this point in the innovation process, these solutions are a conglomerate of various measurements coming from multiple sensors, acting harmoniously towards a common arrangement for a derived outcome, often with external data and legacy data contributed into the algorithms to refine and optimize the results. As time moves forward and demand dictates, it is expected that these sensor arrays will coalesce into a single, unified integrated circuit sensor.
Vision systems were one of the first developments in sensor technology, and we use them everywhere. Many modern cars use cameras to detect their surroundings. But vision systems are not limited to cameras: we use infrared, radar, and LiDAR technologies too. Often, several vision system technologies are used in combination to enhance the accuracy of the objects being observed and to allow for depth perception, angle calculations, 3D shape analysis, and more.
Machine vision is the incorporation of computer vision into industrial manufacturing processes, although it differs substantially from computer vision. In general, computer vision revolves around image processing. Machine vision, on the other hand, uses digital input and output to manipulate mechanical components. Devices that depend on machine vision are often found at work in product inspection, where they often use digital cameras or other forms of automated vision to perform tasks traditionally performed by a human operator. However, the way machine vision systems ‘see’ is quite different from human vision.
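To make the digital input/output idea concrete, here is a minimal sketch of one machine-vision inspection step: threshold a grayscale frame and turn the defect-pixel count into a pass/fail decision that could drive a reject actuator. The image, threshold, and tolerance values are hypothetical examples, not taken from any real inspection system.

```python
# A minimal sketch of a machine-vision inspection step: count pixels darker
# than a threshold (a scratch on a bright part, say) and decide pass/fail.
# All numbers here are hypothetical example values.

def inspect(image, dark_threshold=80, max_defect_pixels=3):
    """Flag a part as defective if too many pixels fall below the
    brightness threshold."""
    defects = sum(1 for row in image for pixel in row
                  if pixel < dark_threshold)
    return ("REJECT", defects) if defects > max_defect_pixels else ("PASS", defects)

# An 8x8 grayscale frame of a mostly bright part with a simulated scratch.
frame = [[200] * 8 for _ in range(8)]
for col in range(2, 7):            # a 5-pixel dark scratch on row 3
    frame[3][col] = 40

verdict, count = inspect(frame)
print(verdict, count)              # -> REJECT 5
```

The point is the output side: the verdict is a digital signal a controller can act on, which is what distinguishes machine vision from image processing alone.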
Digital hearing is advancing rapidly. We use sound detection sensors like microphones to detect audio signals. Often we use multiple microphones to create directional vector detection or 3D sound perception. These sounds are processed and analyzed to cleanse the audio and to better understand exactly what the sensors are hearing.
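The directional detection mentioned above usually rests on the time difference of arrival (TDOA) between microphones: sound from off to one side reaches one microphone slightly later than the other. The sketch below shows the far-field geometry with hypothetical values for microphone spacing, sample rate, and measured delay.

```python
import math

# A minimal sketch of two-microphone direction finding via time difference
# of arrival (TDOA). Spacing, sample rate, and delays are hypothetical.

SPEED_OF_SOUND = 343.0   # m/s in air at ~20 C
MIC_SPACING = 0.20       # metres between the two microphones
SAMPLE_RATE = 48_000     # samples per second

def angle_from_delay(delay_samples):
    """Estimate the angle of arrival (degrees from broadside) from the
    delay, in samples, between the two microphone signals.
    Far-field model: path difference = spacing * sin(angle)."""
    delay_s = delay_samples / SAMPLE_RATE
    ratio = max(-1.0, min(1.0, delay_s * SPEED_OF_SOUND / MIC_SPACING))
    return math.degrees(math.asin(ratio))

print(round(angle_from_delay(0), 1))    # 0.0 -> sound from straight ahead
print(round(angle_from_delay(14), 1))   # ~30.0 -> sound arriving from one side
```

With three or more microphones, the same idea extends to full 3D bearing estimation.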
On a recent project, for a huge bridge being constructed over a fast-flowing river, it was proposed to use these acoustical sensors to monitor the sounds made by the structure’s supporting guy wires. These guy wires make sounds as the wind passes around them. They are taut due to the loads from the bridge, and they react to the loads, forces, and environment acting upon them – wind, resonance, temperature, humidity, water flows and currents, tectonic shifts, and more. These forces cause the guy wires to create harmonics. Think of them as being similar to the strings on a guitar being plucked by the musician. As a result, all these guy wires “sing” or resonate, producing perceptible sounds. These individual guy wire sounds interact with the sounds of the other guy wires and generate 1st, 2nd, and 3rd order harmonics. Higher orders of harmonics are generated too, but they diminish and interact to a lesser extent than the lower orders listed. While humans may or may not hear this song, IoT sensors can. The song is detected and analyzed. So, if a crack develops in the concrete roadway of the bridge, the song will vary with this physical change, and this change in the acoustical patterns and trends allows the engineers to know of the presence of the crack, even if it is undetectable to the eye, and to precisely geolocate it for remediation.
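The monitoring idea above can be sketched in a few lines: estimate the dominant frequency of a wire's "song" and flag a drift away from its healthy baseline. This uses a deliberately naive DFT on a synthetic tone; the baseline frequency, drift threshold, and signal are hypothetical, and a real system would use a proper FFT library on live audio.

```python
import math

# A minimal sketch of guy-wire acoustic monitoring: find the dominant
# frequency with a naive DFT and flag drift from a healthy baseline.
# Frequencies, thresholds, and the test signal are hypothetical.

SAMPLE_RATE = 512           # Hz (kept small so the naive DFT stays fast)
N = 512                     # one second of samples

def dominant_frequency(samples):
    """Return the frequency (Hz) of the strongest DFT bin below Nyquist."""
    best_bin, best_mag = 0, 0.0
    for k in range(1, N // 2):
        re = sum(s * math.cos(2 * math.pi * k * n / N)
                 for n, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * n / N)
                 for n, s in enumerate(samples))
        mag = re * re + im * im
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin * SAMPLE_RATE / N

def tone(freq_hz):
    """A pure sine tone standing in for a recorded wire resonance."""
    return [math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE) for n in range(N)]

BASELINE_HZ = 110.0                          # the wire's healthy fundamental
reading = dominant_frequency(tone(113.0))    # today's measurement
drifted = abs(reading - BASELINE_HZ) > 2.0   # alert threshold in Hz
print(reading, drifted)                      # -> 113.0 True
```

Tracking such spectral drift over time, per wire, is what lets a change in the bridge's "song" be localized to a specific structural element.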
Taste refers to the capability to detect the taste of substances such as food, certain minerals, and poisons. The sense of taste is often confused with the “sense” of flavour, which is a combination of taste and smell perception. Flavour depends on odour, texture, and temperature as well as on taste.
Humans receive tastes through sensory organs called taste buds, or gustatory calyculi, concentrated on the upper surface of the tongue.
There are five basic tastes: sweet, bitter, sour, salty and umami (savory).
In the world of IoT, scientists recently created an artificial tongue to identify counterfeit whisky. Counterfeit products hurt everyone. They hurt the economy, they hurt the actual owners of the brands, and they hurt the consumers who are not getting exactly what they paid for. Identifying counterfeit products is not easy and requires an expert on a particular kind of alcohol. Now scientists from the University of Glasgow have created an artificial “tongue” that can taste which beverage is genuine.
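One plausible way such a tongue reaches a verdict is by comparing a sample's sensor-array "fingerprint" against reference fingerprints, here with a simple nearest-centroid rule. The sensor readings and labels below are invented for illustration and do not come from the Glasgow work.

```python
import math

# A minimal sketch of artificial-tongue classification: match a sample's
# sensor fingerprint to the nearest reference fingerprint. All readings
# and labels are hypothetical.

REFERENCES = {
    "genuine":     [0.82, 0.31, 0.55, 0.12],   # averaged sensor responses
    "counterfeit": [0.40, 0.65, 0.30, 0.48],
}

def classify(fingerprint):
    """Return the reference label with the smallest Euclidean distance
    to the sample fingerprint."""
    def distance(ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(fingerprint, ref)))
    return min(REFERENCES, key=lambda label: distance(REFERENCES[label]))

sample = [0.80, 0.35, 0.50, 0.15]   # a reading from the tasting array
print(classify(sample))             # -> genuine
```

Real systems train on many reference pours per brand, but the principle is the same: the drink is recognized as a whole fingerprint, not by any single measurement.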
Smell or olfaction is the other “chemical” sense. Unlike taste, there are hundreds of olfactory receptors – 388 according to one source – each binding to a particular molecular feature. Odor molecules possess a variety of features and, thus, excite specific receptors more or less strongly. This combination of excitatory signals from different receptors makes up what we perceive as the molecule’s smell.
In the brain, olfaction is processed by the olfactory system. Olfactory receptor neurons in the nose differ from most other neurons in that they die and regenerate on a regular basis. Some neurons in the nose are specialized to detect pheromones.
An electronic nose is a device intended to detect odours or flavours. The electronic nose was developed to mimic human olfaction, which functions as a non-separative mechanism: i.e. an odour/flavour is perceived as a global fingerprint. Essentially, the instrument consists of headspace sampling, a sensor array, and pattern recognition modules that generate the signal patterns used for characterizing odours.
Rots are one of the biggest in-store challenges for potatoes and if they remain undetected, they can quickly pass from one potato to another. This means timely decisions need to be made before the disease becomes established to prevent rejection of consignments and the resulting financial losses.
The first signs of rot are normally perceived by a store manager’s nose once the rot has already started. However, the University of Warwick in the United Kingdom has developed a new tool which replicates the functions of the human nose to help detect odours well below the human detection threshold – and perhaps even before symptoms occur.
The electronic ‘nose’ works by sampling the store air with an array of gas sensors that respond to the different odours produced, which together can distinguish bacterial infection from the other odours in the store.
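In spirit, the rot alert reduces to anomaly detection over the gas-sensor array: learn each sensor's healthy baseline and flag any reading that drifts well outside it. The baselines, readings, and 3-sigma threshold below are hypothetical, not from the Warwick device.

```python
# A minimal sketch of e-nose rot detection: flag any gas sensor whose
# reading deviates far from its healthy baseline. All values hypothetical.

# (mean, standard deviation) per sensor during known-healthy storage
BASELINES = [(0.20, 0.02), (0.35, 0.03), (0.10, 0.01)]

def rot_alert(readings, sigmas=3.0):
    """Return True if any sensor sits more than `sigmas` standard
    deviations from its healthy baseline."""
    return any(abs(r - mean) > sigmas * std
               for r, (mean, std) in zip(readings, BASELINES))

print(rot_alert([0.21, 0.34, 0.10]))   # normal store air -> False
print(rot_alert([0.21, 0.34, 0.19]))   # one gas spiking  -> True
```

Because the decision uses the whole array rather than one sensor, the odour is treated as a global fingerprint, exactly the non-separative principle described above.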
Touch or somatosensation, also called tactition or mechanoreception, is a perception resulting from activation of neural receptors, generally in the skin including hair follicles, but also in the tongue, throat, and mucosa. A variety of pressure receptors respond to variations in pressure (firm, brushing, sustained, etc.). The touch sense of itching caused by insect bites or allergies involves special itch-specific neurons in the skin and spinal cord. Paresthesia is a sensation of tingling, pricking, or numbness of the skin that may result from nerve damage and may be permanent or temporary.
A tactile sensor is a device that measures information arising from physical interaction with its environment. Tactile sensors are generally modelled after the biological sense of cutaneous touch which is capable of detecting stimuli resulting from mechanical stimulation, temperature, and pain (although pain sensing is not common in artificial tactile sensors). Tactile sensors are used in robotics, computer hardware and security systems. A common application of tactile sensors is in touchscreen devices on mobile phones and computing.
Tactile sensors may be of different types including piezoresistive, piezoelectric, capacitive and elastoresistive sensors.
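For a piezoresistive sensor, pressing on the element lowers its resistance, which a microcontroller typically reads through a voltage divider and an ADC. The sketch below backs the resistance out of an ADC count and applies a touch threshold; the divider resistor, supply voltage, ADC range, and threshold are all hypothetical values.

```python
# A minimal sketch of reading a piezoresistive touch sensor through a
# voltage divider and a 12-bit ADC. All component values are hypothetical.

V_SUPPLY = 3.3        # volts across the divider
R_FIXED = 10_000.0    # ohms, fixed divider resistor
ADC_MAX = 4095        # 12-bit ADC full-scale count

def sensor_resistance(adc_count):
    """Back out the sensor's resistance from the voltage measured across
    the fixed resistor: v = V_SUPPLY * R_FIXED / (R_FIXED + R_sensor)."""
    v = adc_count / ADC_MAX * V_SUPPLY
    return R_FIXED * (V_SUPPLY - v) / v

def touch_detected(adc_count, threshold_ohms=20_000.0):
    """Pressing the sensor lowers its resistance; below the threshold
    we call it a touch."""
    return sensor_resistance(adc_count) < threshold_ohms

print(touch_detected(500))    # light/no press: high resistance -> False
print(touch_detected(3000))   # firm press: low resistance -> True
```

Capacitive touchscreens work on a different physical principle, but the software pattern, converting a raw electrical reading into a touch/pressure estimate, is much the same.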
We have robots that can walk, see, talk and hear, and manipulate objects in their robotic hands. There is even a robot that can smell.
But what about a sense of touch? This is easier said than done and there are limitations to some of the current methods being looked at, but we are developing a new technique that can overcome some of those problems.
For humans, touch plays a vital role when we move our bodies. Touch, combined with sight, is crucial for tasks such as picking up objects – hard or soft, light or heavy, warm or cold – without damaging them.
In the field of robotic manipulation, in which a robot hand or gripper has to pick up an object, adding the sense of touch could remove uncertainties in dealing with soft, fragile and deformable objects.
There are still details that can be tricky to infer from switching sensory modes, like telling the color of an object by just touching it, or telling how soft a sofa is without actually pressing on it. The researchers say this could be improved by creating more robust models for uncertainty, to expand the distribution of possible outcomes.
In the future, this type of sensory model could help with a more harmonious relationship between vision and robotics, especially for object recognition, grasping, better scene understanding, and helping with seamless human-robot integration in an assistive or manufacturing setting.
“We are just at the first method that can convincingly translate between visual and touch signals”, says Andrew Owens, a postdoc at the University of California at Berkeley. “These methods have the potential to be very useful for robotics, where you need to answer questions like ‘is this object hard or soft?’, or ‘if I lift this mug by its handle, how good will my grip be?’ This is a very challenging problem, since the signals are so different, and these sensory models have demonstrated great capability.” So, the journey to replicate the human senses continues, in fact, it is still just in its infancy. But, the future looks promising, very promising.
About the Author:
Michael Martin has more than 35 years of experience in systems design for broadband networks, optical fibre, wireless and digital communications technologies.
He is a business and technology consultant. Over the past 14 years with IBM, he has worked in the GBS Global Center of Competency for Energy and Utilities and the GTS Global Center of Excellence for Energy and Utilities. He is a founding partner and President of MICAN Communications and before that was President of Comlink Systems Limited and Ensat Broadcast Services, Inc., both divisions of Cygnal Technologies Corporation (CYN: TSX).
Martin currently serves on the Board of Directors for TeraGo Inc (TGO: TSX) and previously served on the Board of Directors for Avante Logixx Inc. (XX: TSX.V).
He serves as a Member, SCC ISO-IEC JTC 1/SC-41 – Internet of Things and related technologies, ISO – International Organization for Standardization, and as a member of the NIST SP 500-325 Fog Computing Conceptual Model, National Institute of Standards and Technology.
He served on the Board of Governors of the University of Ontario Institute of Technology (UOIT) [now Ontario Tech University] and on the Board of Advisers of five different Colleges in Ontario. For 16 years he served on the Board of the Society of Motion Picture and Television Engineers (SMPTE), Toronto Section.
He holds three master’s degrees, in business (MBA), communication (MA), and education (MEd). As well, he has diplomas and certifications in business, computer programming, internetworking, project management, media, photography, and communication technology.