The Internet of Things (IoT) architecture has evolved continually and today is maturing at a rapid pace, bringing new capabilities that deliver new outcomes for users. One aspect that has changed dramatically over the past two years is the location of intelligence within the network fabric.

Stage 1: Initial Constructs
Initially, these IoT networks were fairly unintelligent. They had sensors at one end, and the basic data was moved to the control end. It consisted of simplistic data such as on/off states and read values, with only the most basic logic control based upon ‘if/then/else’ programming. It was similar to older command-and-control architectures such as SCADA (supervisory control and data acquisition) and PLC (programmable logic controller) systems.
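The kind of ‘if/then/else’ control logic those early systems ran can be sketched as follows. The sensor, thresholds, and actuator here are purely illustrative, not taken from any real SCADA or PLC product:

```python
# Minimal sketch of Stage 1 "if/then/else" control logic.
# Sensor name, thresholds, and the pump actuator are illustrative assumptions.

def control_step(temperature_c: float, pump_on: bool) -> bool:
    """Return the new pump state from a simple threshold rule."""
    if temperature_c > 30.0:      # too hot: switch the cooling pump on
        return True
    elif temperature_c < 20.0:    # cool enough: switch it off
        return False
    else:                         # in-band: keep the current state (hysteresis)
        return pump_on
```

There is no network-side intelligence here at all: the sensor reports a value, and a fixed rule at the control end decides the output.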
Stage 2: Centralized Intelligence
Then, as the cloud evolved, compute power was added and datagrams flew from the sensors to the cloud, where they were processed. This was an improvement, and it allowed other heavy-iron solutions, such as early artificial intelligence (AI) systems, to be co-located there. With AI in the cloud, intelligence could be derived from the sensor data carried over the network. This produced derived data, computed by the AI system from the sensor streams and from other sources of data, be they internal, external, or historical archives. It was a step forward and offered some interesting results, but it was still very expensive and not as practical as originally hoped.
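The centralized pattern can be sketched like this: dumb sensors ship raw readings upstream, and everything of value is derived in one central place by combining them with historical archives. The function, threshold, and data shapes are illustrative assumptions:

```python
# Sketch of Stage 2 centralized intelligence: sensors do no processing of
# their own; a single cloud process derives data by comparing current
# readings against a historical baseline. All names are illustrative.

from statistics import mean

def cloud_process(readings: list[float], history: list[float]) -> dict:
    """Centrally derive data: current average vs. historical baseline."""
    current = mean(readings)
    baseline = mean(history) if history else current
    return {
        "current_avg": current,
        "baseline": baseline,
        "anomaly": abs(current - baseline) > 5.0,  # arbitrary threshold
    }

# Every raw reading crosses the network before any intelligence is applied.
result = cloud_process([21.0, 22.5, 23.0], history=[20.0, 20.5, 21.0])
```

The cost and latency problems the article notes follow directly from this shape: every datagram must make the round trip before anything useful comes back.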
Stage 3: Lateral Intelligence

When Kubernetes and the broader trend toward microservices evolved, they also affected IoT in a positive manner. Little was seen at first, as the rebuild of cloud sites into modular components consumed everyone’s attention. But as cloud evolved toward multi-cloud, whereby applications were hosted in different places and collaborated, several issues had to be overcome. One was latency between clouds; the other was the APIs (application programming interfaces) that had to interconnect apps located in different data centres that might be in different states, across the country, or even around the world. This interconnection capability was a challenge to accomplish, but once it gained traction, modular microservices were no longer limited to coexisting on one cloud in one place and could be deconstructed to run on different clouds, or on multiple clouds simultaneously.
Stage 4: Edge Intelligence
The next natural step was edge computing. With edge computing built into the network fabric, data could live at the front edges of the network instead of only at the back. Different sizes and types of connections allowed data to flow both horizontally, from cloud to cloud, and vertically, between the cloud and the edge. The data could now live on the network fabric itself instead of only in the cloud or data centre.
Microservices allowed applications to be deconstructed vertically from the edge to the cloud. In a way, this allowed the cloud to extend its applications to the edge. Mostly, though, separate, stand-alone apps ran at the edge and connected to larger applications in the cloud. A shared-application strategy, in which edge computing bridges into cloud computing, is only now starting to be seen. It is expected to develop quickly under the advancements of 5G cellular, which shares this same architecture.

By having compute, storage, and analytics at the edge, data can reside much closer to the user and therefore need not travel to the cloud at all, unless desired. And if it does go to the cloud, it can do so in parallel with the edge processing procedures. This edge work reduces latency measurably and delivers local results from the sensors back to users as advisories, or feeds steering adjustments back into the edge processing systems.
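That ‘answer locally, upload in parallel’ pattern can be sketched with a background uploader. Here a thread and an in-process queue stand in for the cloud link; the function names, message shapes, and advisory threshold are all illustrative assumptions:

```python
# Sketch of Stage 4 edge processing: the edge node answers the user
# immediately from local data, while readings are forwarded to the cloud
# in parallel on a background thread. Names and thresholds are illustrative.

import queue
import threading

cloud_queue: "queue.Queue[dict | None]" = queue.Queue()

def forward_to_cloud() -> None:
    """Drain readings toward the cloud without blocking the response path."""
    while True:
        reading = cloud_queue.get()
        if reading is None:          # sentinel: stop the uploader
            break
        # ... upload to cloud storage/analytics would happen here ...

def edge_process(reading: dict) -> str:
    """Produce an immediate local advisory; the upload happens in parallel."""
    cloud_queue.put(reading)         # non-blocking hand-off to the uploader
    if reading["temp_c"] > 28.0:
        return "advise: increase ventilation"
    return "ok"

uploader = threading.Thread(target=forward_to_cloud, daemon=True)
uploader.start()
advice = edge_process({"sensor": "gh-1", "temp_c": 30.2})
cloud_queue.put(None)                # shut the uploader down cleanly
uploader.join()
```

The user gets the advisory at local latency; the cloud still receives every reading, just off the critical path.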
Stage 5: Federated Intelligence
Finally, we are now seeing compute, storage, and analytics built right into the sensors, making them ‘smart sensors’. These extreme-edge smart sensors can process data on their own. They can collaborate with the edge network processors and with other extreme-edge smart sensors, connected via a mesh topology, to share data on an east-west and north-south basis. These smart sensors can collaborate seamlessly with edge computing and even transparently with cloud computing.
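A minimal sketch of that east-west collaboration: each smart sensor processes its own reading, gossips it to mesh neighbours, and can then answer an aggregate query locally with no cloud round trip. The class, link model, and one-hop sharing rule are illustrative assumptions, not a real mesh protocol:

```python
# Sketch of Stage 5 federated smart sensors: peers in a mesh share
# readings east-west and answer aggregate queries locally.
# The class and message shapes are illustrative, not a real protocol.

class SmartSensor:
    def __init__(self, name: str):
        self.name = name
        self.peers: list["SmartSensor"] = []
        self.readings: dict[str, float] = {}   # latest value per sensor

    def link(self, other: "SmartSensor") -> None:
        """Form a bidirectional mesh link (east-west)."""
        self.peers.append(other)
        other.peers.append(self)

    def read(self, value: float) -> None:
        """Process a reading locally, then share it with mesh neighbours."""
        self.readings[self.name] = value
        for peer in self.peers:
            peer.readings[self.name] = value   # one-hop gossip

    def mesh_average(self) -> float:
        """A federated answer computed locally, with no cloud round trip."""
        return sum(self.readings.values()) / len(self.readings)

north, south = SmartSensor("north"), SmartSensor("south")
north.link(south)
north.read(21.0)
south.read(23.0)
```

Either node can now report the mesh-wide view on its own; the edge or cloud only needs to be consulted when desired.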
Historically, internet access was injected only at the cloud. Now it can be injected at the edge and the extreme edge, which means that external data and legacy data can be combined with edge and extreme-edge data. This is powerful, bringing instantaneous responses to users’ smart tablets or to automated command-and-control devices exactly when and where they are needed most. This smart network can be both real-time and non-real-time at the same time.

Example of a Federated IoT Network
As an example, let us use agriculture and how best to manage weather to optimize the crops. If we have a greenhouse with sensors inside it (light colour temperature, light duration and intensity, spectrum, air temperature, humidity, airflow, etc.), as well as outside on the property (adding wind speed, wind direction, wind chill, barometric pressure, etc.), we can collect the local weather at the site as well as the interior weather within it. We can paint a picture of how the outdoor weather will affect our crop growing inside the greenhouses.
From external service providers, we can collect regional and national weather. These external sources of weather data can predict the weather’s impact minutes, hours, days, or perhaps even a week in advance. Our record-keeping of legacy weather patterns, trends, impacts, and outcomes can reveal how the local weather and the weather inside the greenhouse are affected by the movements of regional weather, and can even anticipate future interior impacts days in advance.
To be clear, the crop in the greenhouses is our mission. So, knowing the weather is simply a means to protect the crop and perhaps even enhance it.
By making agile, timely changes to the interior weather, we can counteract adverse impacts by adapting to the exterior weather. Our goal is to provide the crops inside the greenhouses with perfect weather 24/7, which optimizes growth and yields. The better we can do this, the healthier our product is and the faster it gets to market to realize our profits.
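The greenhouse decision described above can be sketched as a single control step that fuses an interior reading, an exterior reading, and a regional forecast. The target temperature, the coupling coefficient, and the proportional rule are all illustrative assumptions, not agronomic facts:

```python
# Sketch of the federated greenhouse example: interior sensors, on-site
# exterior sensors, and an external regional forecast are fused at the edge
# to steer the interior climate toward a target. All numbers are
# illustrative assumptions, not agronomic values.

TARGET_TEMP_C = 24.0   # assumed ideal interior temperature

def adjust_interior(interior_temp: float,
                    exterior_temp: float,
                    forecast_delta: float) -> dict:
    """Decide heating/venting now, pre-compensating for the forecast."""
    expected_exterior = exterior_temp + forecast_delta   # e.g. front arriving
    # Assume the outdoor trend pulls the interior toward it with a weak
    # coupling factor (0.2 is an arbitrary stand-in for the real physics).
    drift = 0.2 * (expected_exterior - interior_temp)
    error = TARGET_TEMP_C - (interior_temp + drift)
    return {
        "heat": error > 0.5,   # interior will run cold: add heat now
        "vent": error < -0.5,  # interior will run hot: vent now
    }

# Regional forecast says a cold front will drop exterior temperature 6 °C.
action = adjust_interior(interior_temp=24.0, exterior_temp=18.0,
                         forecast_delta=-6.0)
```

Note what is federated here: the interior and exterior readings are local, the forecast delta comes from an external provider, and the decision is made at the edge before the cold front ever reaches the crop.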

By sharing intelligence at every stage of the ecosystem and making decisions based upon a federated architecture, we can outperform centralized architectures many times over. We bring real-time to our IT systems, as well as ‘just in time’.
By pushing the intelligence out to the edge, we can remove the negative impacts that non-real-time systems have on our solutions. The results are better quality, lower costs, and, most importantly, the speed to act.
————————–MJM ————————–
About the Author:
Michael Martin has more than 35 years of experience in systems design for applications that use broadband networks, optical fibre, wireless, and digital communications technologies. He is a business and technology consultant. He offers his services on a contracting basis. Over the past 15 years with IBM, he has worked in the GBS Global Center of Competency for Energy and Utilities and the GTS Global Center of Excellence for Energy and Utilities. He is a founding partner and President of MICAN Communications and before that was President of Comlink Systems Limited and Ensat Broadcast Services, Inc., both divisions of Cygnal Technologies Corporation (CYN: TSX). Martin currently serves on the Board of Directors for TeraGo Inc (TGO: TSX) and previously served on the Board of Directors for Avante Logixx Inc. (XX: TSX.V). He has served as a Member, SCC ISO-IEC JTC 1/SC-41 – Internet of Things and related technologies, ISO – International Organization for Standardization, and as a member of the NIST SP 500-325 Fog Computing Conceptual Model, National Institute of Standards and Technology. He served on the Board of Governors of the University of Ontario Institute of Technology (UOIT) [now OntarioTech University] and on the Board of Advisers of five different Colleges in Ontario. For 16 years he served on the Board of the Society of Motion Picture and Television Engineers (SMPTE), Toronto Section. He holds three master’s degrees, in business (MBA), communication (MA), and education (MEd). As well, he has three undergraduate diplomas and five certifications in business, computer programming, internetworking, project management, media, photography, and communication technology. He has earned 20 badges in next generation MOOC continuous education in IoT, Cloud, AI and Cognitive systems, Blockchain, Agile, Big Data, Design Thinking, Security, and more.