A new trend is developing around three of the hottest emerging technologies.  This trifecta of innovations is: Augmented Reality (AR), Virtual Reality (VR), and the Internet of Things (IoT).

This trend stems from a serious problem in the Information Technology (IT) and Operational Technology (OT) worlds today: how do we make sense of all the data that we receive from our networks?  As networks evolve, the volume of data has grown exponentially, becoming a massive tsunami that is consuming our people and processes.  Even if we can get ahead of it, making sense of that Big Data in a timely manner and sharing it with others in a meaningful, practical, and cost-effective way is hard work.  A new approach is therefore needed to process Big Data and to visualize it in a quick and easy fashion.

The smart folks at IBM have developed a framework for tackling Big Data, centered on four “Vs” (though some would argue there are as many as six):

  • Volume – though “big data” doesn’t need to be of any specific size, we can safely say that you will not be able to load big data sets into Microsoft Excel
  • Velocity – just how fast data is being received, as well as how quickly the data needs to be analyzed so it can be used to make meaningful decisions
  • Variety – the number of data sources that make up your datasets, including sensor data, plain text, rich documents, video, social analytics, etc.
  • Veracity – how reliable your datasets are, which is especially important because if you cannot trust the data in the first place, no amount of analysis will yield good results

There is one “V” that has not yet received a lot of attention: Visualization.  Even with the incredible exponential increases we see in computing power year-over-year, our need to consume data far outstrips our ability to process it (cognitively or otherwise), and there is a point at which even data science is more of an art in practice.  Visualization already plays a crucial role in data science, helping data scientists make sense of the structure and underlying patterns that may be held within the data, even before any serious computation begins.
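The claim that summary statistics alone can hide the structure of a dataset is famously illustrated by Anscombe's quartet: four small datasets with nearly identical means and correlations that look completely different when plotted.  A minimal sketch in Python (the data values are Anscombe's published figures; the helper function name is our own):

```python
from statistics import mean

# Anscombe's quartet: four (x, y) datasets with near-identical summary
# statistics but radically different shapes when visualized.
x_common = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = [
    (x_common, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    (x_common, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    (x_common, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
     [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

for i, (xs, ys) in enumerate(quartet, start=1):
    # The numbers agree to two decimal places across all four datasets...
    print(f"Dataset {i}: mean(y) = {mean(ys):.2f}, r = {pearson(xs, ys):.3f}")
# ...yet a scatter plot of each reveals a line, a curve, an outlier-skewed
# line, and a vertical cluster: structure no table of statistics shows.
```

Every dataset reports a mean y of about 7.50 and a correlation of about 0.816, and only a picture tells them apart – which is exactly why visualization matters before any serious computation begins.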

Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented), by a computer. As a result, the technology functions by enhancing one’s current perception of reality. By contrast, virtual reality replaces the real world with a simulated one. Augmentation is conventionally in real-time and in semantic context with environmental elements, such as sports scores on TV during a match. With the help of advanced AR technology (e.g. adding computer vision and object recognition) the information about the surrounding real world of the user becomes interactive and digitally manipulable. Information about the environment and its objects is overlaid on the real world. This information can be virtual or real, e.g. seeing other real sensed or measured information such as electromagnetic radio waves overlaid in exact alignment with where they actually are in space.

Virtual Reality (VR) – Most up-to-date virtual realities are displayed either on a computer screen or on special high-definition stereoscopic VR displays, and some simulations include additional sensory information, such as sound delivered through speakers or headphones targeted towards VR users.  Some advanced haptic systems now include tactile information, generally known as force feedback, in medical, gaming, and military applications.  Furthermore, virtual reality covers remote communication environments that provide the virtual presence of users – the concepts of telepresence and telexistence – or a virtual artifact (VA), either through standard input devices such as a keyboard and mouse, or through multimodal devices such as a wired glove or an omnidirectional treadmill.  The simulated environment can be similar to the real world in order to create a lifelike experience – for example, in simulations for pilot or combat training – or it can differ significantly from reality, such as in VR games.

The Internet of Things (IoT) is the network of physical objects – devices, vehicles, buildings and other items embedded with electronics, software, sensors, and network connectivity – that enables these objects to collect and exchange data. The Internet of Things allows objects to be sensed and controlled remotely across existing network infrastructure, creating opportunities for more direct integration of the physical world into computer-based systems, and resulting in improved efficiency, accuracy and economic benefit; when IoT is augmented with sensors and actuators, the technology becomes an instance of the more general class of cyber-physical systems, which also encompasses technologies such as smart grids, smart homes, intelligent transportation and smart cities. Each thing is uniquely identifiable through its embedded computing system but is able to interoperate within the existing Internet infrastructure. Experts estimate that the IoT will consist of almost 50 billion objects by 2020.
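The mechanics described above – uniquely identifiable things collecting data and exchanging it over existing Internet infrastructure – can be sketched in a few lines of Python.  The message format and field names below are illustrative assumptions only, not any standard; real deployments typically use protocols such as MQTT or CoAP with their own schemas:

```python
import json
import uuid
from dataclasses import dataclass, asdict

@dataclass
class SensorReading:
    """One telemetry message from a uniquely identifiable 'thing'.

    Field names here are hypothetical; IoT platforms define their own schemas.
    """
    device_id: str      # unique identity of the embedded device
    sensor_type: str    # e.g. "temperature"
    value: float
    unit: str
    timestamp: str      # ISO 8601, so any consumer can parse it

    def to_json(self) -> str:
        # Serialize for exchange over ordinary Internet infrastructure
        # (an HTTP body, an MQTT payload, etc.).
        return json.dumps(asdict(self))

# A device generates a reading and publishes it as JSON...
reading = SensorReading(
    device_id=str(uuid.uuid4()),   # each thing is uniquely identifiable
    sensor_type="temperature",
    value=21.7,
    unit="celsius",
    timestamp="2016-03-01T12:00:00Z",
)
payload = reading.to_json()

# ...and a remote consumer reconstructs it from the wire format.
received = SensorReading(**json.loads(payload))
```

The design choice worth noting is that the "thing" and the consumer share nothing but a plain-text wire format, which is what lets billions of heterogeneous objects interoperate within the existing Internet.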

The magic is in combining these three technologies.  With so much Big Data, we need a means to make sense of it all.  We need to make the data visual and to permit it to be examined in both 2D and 3D.  We need to manipulate the data in real time, or at least in near real time.  We can no longer wait days for the data to be crunched while some wildly meaningless print-out spits out page after page of digits.  These three technologies will permit users to become immersed within the data and to fly around it in a visual form that reveals trends, patterns, variations, statistical outliers, structures, correlations, connections and disconnections, and much more.  They will place the data into context for the user by placing the user inside the data.  The solution is to make the data outcomes visual.  Data visualization is the next big trend in the industry, and it is being built upon AR, VR, and IoT.



About the Author:

Michael Martin has more than 35 years of experience in broadband networks, optical fibre, wireless and digital communications technologies. He is a Senior Executive Consultant with IBM Canada’s GTS Network Services Group. Over the past 11 years with IBM, he has worked in the GBS Global Center of Competency for Energy and Utilities and the GTS Global Center of Excellence for Energy and Utilities. He was previously a founding partner and President of MICAN Communications and before that was President of Comlink Systems Limited and Ensat Broadcast Services, Inc., both divisions of Cygnal Technologies Corporation (CYN:TSX). Martin currently serves on the Board of Directors for TeraGo Inc (TGO:TSX) and previously served on the Board of Directors for Avante Logixx Inc. (XX:TSX.V).  He served on the Board of Governors of the University of Ontario Institute of Technology (UOIT) and on the Board of Advisers of four different Colleges in Ontario as well as for 16 years on the Board of the Society of Motion Picture and Television Engineers (SMPTE), Toronto Section.  He holds three Masters level degrees, in business (MBA), communication (MA), and education (MEd). As well, he has diplomas and certifications in business, computer programming, internetworking, project management, media, photography, and communication technology.



nGrain. (2016). 3 reasons why “visualization” is the biggest “V” for big data. Retrieved on March 1, 2016.

Wikipedia. (2016). Definitions for AR, VR, and IoT.