Reading Time: 13 minutes

Over the past few years, we have seen the rapid demise of the private data centre in favour of cloud computing.  Gartner Research VP Dave Cappuccio said the analyst firm expects 80 percent of enterprises will have shut down their traditional data centers by 2025 – up from just 10 percent in July 2018.


The mass exodus from private to public data hosting has been progressing at an astonishing rate.  This move has not meant a reduction in your own staff; in fact, it is the opposite, because using public cloud is complex and difficult, so you still need your own people to manage the transition and, later, the operation.  While you will shed many tasks and responsibilities to the selected cloud providers, new obligations arise that must be addressed.  So, what is motivating this shift?

  • Leveraging someone else’s infrastructure.  The requisite infrastructure such as HVAC, power conditioning, UPS, space, racks, patching, updates, and security costs big dollars.  If you can shed some or all of these burdens, that is a smart move.  Because this infrastructure is shared over many users rather than dedicated just to your business, the costs are spread out, and that is how the savings are realized.
  • Scalability is likely the number one reason to move to cloud.  Businesses must be agile to survive today.  They need to scale up and down as necessary and do it fast.  By outsourcing to a public cloud provider, you can contract for this level of agility.
  • Regionalization is a popular answer too.  Placing your workloads closer to the end users is smart and provides a higher level of application performance.  In the older centralized private data centre model, you put your data in one place and your customers were often elsewhere.
  • Shrinking IT budgets are a strong motivator, as the infrastructure for your own private data centre is expensive, and by moving to a public cloud model you can save capital.  That capital can then be spent on profit centres rather than cost centres, which is a far better use of your limited capital resources.
  • The shift from a CapEx-based model to an OpEx-based model is now a very compelling driver.  How does this accounting change impact the migration to public cloud?  Some say it is an essential point in the decision process.  As Canada leaves GAAP behind and adopts IFRS accounting rules, the way that OpEx is treated is different.  The IFRS requirements for lessees are significantly different: with limited exceptions, all leases are “on balance sheet” and result in the recognition of an asset and a liability.  This is very different from the old GAAP days.
  • Going hand-in-hand with the OpEx model is a fixed cost-per-seat strategy.  This ties into the scalability aspect: with fixed OpEx costs per seat you can budget much better and make a more competitive offering to your customers.  Financiers like to know your exact costs, and those costs must be predictable for them to smile.
  • Workplace flexibility is often cited as a reason too.  This is not to say that you cannot work from home with a private data centre, but native cloud is built upon remote connectivity, so it is an inherent aspect of cloud to work from any location.
  • Acquisitions often underpin a cloud model since companies find merging disparate data centres difficult and expensive, so a move to cloud creates a common, shared platform that makes these integrations easier.
  • Access to cloud-based regulatory compliance tools is another reason to shift to cloud.  If you are in a regulated industry like finance or healthcare, buying specialized compliance tools can be challenging.  Many cloud operators have these tools readily available for an affordable fee.
  • People and staffing represent one of the biggest arguments against maintaining a private data centre.  The cloud operators can afford to hire the best and brightest workers to maintain the cloud, as these costs are spread out over multiple tenants.  For private data centre operators, it is difficult to compete for and secure this same calibre of talent.
  • Security must be on every list for every type of business model.  The threats to your data, reputation, customers, and business existence have never been greater.  The theory is that you are better protected on the cloud than in your own data centre.


So, now that we have some idea why we are all moving away from our own data centres towards a public cloud model, what is the logic behind the controversial title of this article?  Why would I even suggest that cloud is dead?  After all, it sounds like it is just getting started.

It is often said that cloud computing is simply moving your data and applications from your own data centre to someone else’s data centre.  And for most companies that have migrated to cloud, this statement largely rings true.

The Advent of Edge and Federated Computing

Now, there are many other drivers at play today, or about to emerge, that will dramatically impact the virtualization of your data and applications.  In fact, these drivers are so compelling that the current metamorphosis from data centre to cloud will predictably continue right past the current centralized cloud model towards a new and more innovative model, a model built upon federated networks and edge computing.

Figure: Cloudlet architecture

So what is driving this continuation to something as different as edge computing?  Something so disruptive that, even as cloud computing arrives, the industry is still running at full speed, like a freight train about to race right past the cloud station stop.  The next iteration is not just a cloud model, nor even a multi-cloud model, but a model where the data lives ubiquitously on the network fabric and around its perimeter at the edge.  Here is why a federated, edge computing model is truly the next destination:

  • Core business models – Business is changing so fast that we live in a business world of absolute extremes, “feast or famine”.  Therefore, agility is mission critical.  Some mobile apps have a maximum lifespan of 18 months or less, so we cannot use classic development cycles to create them; they would be obsolete before they even hit the market.  And it is not just mobile apps, it is all applications.  The pace of business and the pace of change are driving and demanding even more changes to the way we work.
  • Network congestion – Currently, almost all the data we generate is sent to and processed in distant centralized clouds.  The cloud is a facility that provides virtually unlimited computing power and storage space over the internet.  This mechanism is already becoming impractical, and by the time billions more devices are connected, delays due to congested networks will be significant.
  • Virtualization – Everything in IT and OT is becoming virtualized.  We are seeing data centres, LAN, WAN, CPE, and computers move from a hardware-defined model to a software-defined model.  By separating the data plane from the control plane, we gain immense flexibility, with control handled separately from the flow of data.
  • Microservices – Here is the game changer: if your company’s development team is smart and skilled enough to build microservices, then these functions can be delivered and deployed wherever it makes the most sense for them to be executed and managed.  Your applications can be federated.
  • Where does your data live? – Since 95% of all stored data is never looked at, or even monetized in any way whatsoever, companies are starting to question the rationale of storing data on the cloud.  If algorithms are run at the edge, does it really make sense to move the resulting raw data to the cloud in the centre for archiving?  Why not simply move the derived data (see the first sketch after this list)?  The lifespan and obsolescence of data suggest that saving raw data has little or no intrinsic business value.  If we can easily reproduce data upon demand at the edge, then why store it centrally?  If we have the derived data, do we really need the raw data too?
  • Network costs – Likewise, paying to transit that data over already congested networks makes little sense, especially if it offers no way to be monetized or be of value to the company.  I saw one customer paying over $2 million per month to transport data nationwide, with nearly 80% of that data possessing zero monetary value to the business.  By transporting only the data with value, the derived or summary data, the network savings were substantial.
  • Data sources – With many new edge applications, we see three types of data.  There is real-time data harvested from sensors, microcomputers, devices, users, and a myriad of other sources.  There is external data that comes from third-party providers; weather data is an example, which must be timely (say, four times per day) but may not need to be real-time (sub 100 ms).  And there is legacy data, provided by history logs, which shows patterns, trends, and relationships.  All of these types of data combine into a composite for the computing actions.
  • Peer to peer – With edge computing, data can be shared at the edge, node to node, with other edge servers.  This edge meshing saves transiting the same data over the network from the cloud to multiple destinations (see the second sketch after this list).  A meshed model makes far more sense and costs far less to operate than the classic hub-and-spoke star design, where everything must pass through the centre.
  • Edge computing – Edge computing is a disruptive new technology, still in its infancy, yet it offers a powerful solution.  Delays will be reduced by processing data geographically closer to the devices where it is needed, that is, at the edge of the network, instead of in a distant cloud.  For example, smartphone data could be processed on a home router, and navigation guidance for smart glasses could be obtained from a mobile base station instead of the cloud.  By pushing the intelligence closer to the edge, the data becomes alive and far more powerful.  We can then perform new functions that could not be done in the centralized model.  In healthcare, if we extend services to the patient’s home, you would not want to wait for a pacemaker to trigger a defibrillator routine while the data travels to the cloud and back.  You want the instant reactions that edge computing provides so easily.
  • Federated Network Architectures – Federating networks means to share resources among multiple independent networks and nodes within the network fabric in order to optimize the use of those resources, improve the quality of network-based services, and/or reduce costs.
  • Security – Federated security allows for clean separation between the service a client is accessing and the associated authentication and authorization procedures.  It also enables collaboration across multiple systems, networks, and organizations in different trust realms.  What is critical in the next generation of federated networks is that the security must be federated too, meaning the security architecture must map to the new network architecture.  The old way of wrapping a massive firewall around the centralized data centre is no longer the only or the best strategy; a Zero Trust model that is federated over the network fabric is necessary.  Security is a very complex topic and not yet fully understood in a federated model.  More learning, design, testing, and practical experience is required.
  • Real-time – Time matters.  The shift away from a model of delayed batch processing to real-time processing is affecting users.  But how do we define real-time?  The answer varies depending upon the application.  For a voice application, we desire delays of no more than roughly 100 ms; for a video gaming application, 50 ms may be the outer limit; and for a smart grid application, sub-10 ms delays must be achieved.  So the time domain drives the move away from the long delays of a centralized architecture towards the ultra-low delays available from edge computing models.
  • The rise of 5G and IoT – Both of these emerging solutions are built with edge computing as a native attribute of their core design.  So, the location of data is going to be federated anyway due to the rise of 5G and IoT.
  • Data residency – Regulatory demands will restrict where data lives.  Can the data cross a border?  What will be the rules for its protection when it lives in another country with differing legislation for privacy and security?  With cloud, the resiliency strategy, which is tied to back-up data centres, is critical.  However, these back-up sites are often in other countries.  Technically this might be fine, but is it acceptable politically?
  • Multi-tenancy – Most of the moves from older data centres to cloud were just “lift and shift” moves of clunky, monolithic, single-customer applications and did not result in a flexible microservices-based model.  So most of these applications are limited and very clumsy on the cloud.  They are not cloud native and therefore not actually optimized for the cloud.  How will these same applications survive in a federated model?  Clearly they will not.  Applications will need to be rebuilt into a microservices model that supports federation of the data and fits with this new architecture.  Once this is done, they can be configured to serve multiple customers at the same time.  The only way to reduce costs is to share systems and resources; therefore, the multi-tenancy model is vital for reducing costs.
  • Latency – Where is the data coming from, where does it need to be, when does it need to be there, and where does it go?  The answers to these questions all offer differing latency performance criteria.  Latency is tied to the real-time topic and is the metric for the definition of real-time.
  • Infrastructure – One of the chief advantages of cloud or even private data centres is the underlying infrastructure to support it all.  By centralizing the data and the applications, we can consolidate the infrastructure too.  This permits us to build a quality infrastructure that protects and extends the life of our resources.  However, what happens in this federated model when edge computing is ubiquitous and scattered all over?  Will we have the right HVAC air handling capabilities?  Will there be clean and stable power?  What about the administration of this federated edge model?  How will technicians get to these machines and service and support them?  Not all answers are known yet, but centralized administration and orchestration will contribute to these answers.
  • Statelessness – The web introduced the concept of RESTful services: functions that can be invoked, executed, and deliver their results over the HTTP protocol.  Strangely, the practical meaning of the term has drifted from the “representational state transfer” it originally stood for: such a function simply does its work without having to maintain data about the program or the server that called it.  And because it is stateless, it can perform as part of a distributed application without the need for synchronicity and direct oversight – two services which the web, by design, cannot provide (the third sketch after this list shows a minimal stateless service).  However, truly useful distributed applications require synchronicity and oversight – or rather, orchestration.  In 2013, the ideal of orchestration first revealed itself through Docker, the first viable system for packaging and deploying applications – in whole or in part – so they could run essentially anywhere.  Entire applications could be subdivided into containers: packages of functions that communicate with each other over IP.  The barriers of the virtual machine, which confined applications to these inescapable VMs, were effectively broken.  Now, applications can live in many places simultaneously: in the cloud, on multiple clouds, and at the edge.
  • Omnipresent – These applications will be anchored in the cloud, but also transparently extended to the edge in the form of baby clouds that I like to call cloudlets.  So, the applications will become universal.  They will be present in many places at once.  Cloudlets are the future.
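
To make the “derived data, not raw data” argument above more concrete, here is a minimal sketch in Python.  The sensor values, window size, and alarm threshold are all hypothetical; the point is simply that the edge node reduces a window of raw readings to a small summary record, and only that summary travels upstream to the central cloud.

```python
# Minimal sketch: summarize raw sensor readings at the edge and forward
# only the derived record.  All names and thresholds are hypothetical.
import json
import statistics


def summarize_window(readings):
    """Reduce one window of raw sensor readings to a small derived record."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(statistics.mean(readings), 2),
        "alarm": max(readings) > 90.0,  # hypothetical alarm threshold
    }


def process_at_edge(raw_window):
    """Only the returned payload (a few dozen bytes) is sent to the cloud;
    the raw window stays on the edge node and ages out locally."""
    return json.dumps(summarize_window(raw_window))


if __name__ == "__main__":
    raw = [71.2, 70.8, 72.5, 95.1, 71.9, 70.4]  # e.g. one minute of temperature samples
    print(process_at_edge(raw))
```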
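
The peer-to-peer item can be illustrated with a second, purely in-memory sketch of edge meshing.  The node names and record are invented, and a real mesh would exchange data over a network protocol, but it shows an edge node handing a derived record directly to its neighbours instead of routing it up to the cloud and back out to each site.

```python
# Minimal in-memory sketch of edge-to-edge sharing.  Each node keeps a list
# of neighbouring edge nodes and pushes fresh derived records to them
# directly, so the same data never has to transit the WAN twice.
class EdgeNode:
    def __init__(self, name):
        self.name = name
        self.neighbours = []  # peer EdgeNode objects in the mesh
        self.store = {}       # locally cached derived records

    def publish(self, key, record):
        """Store a derived record locally and hand it to mesh peers."""
        self.store[key] = record
        for peer in self.neighbours:
            peer.receive(key, record, source=self.name)

    def receive(self, key, record, source):
        self.store.setdefault(key, record)
        print(f"{self.name} received {key} from {source}")


if __name__ == "__main__":
    a, b, c = EdgeNode("site-a"), EdgeNode("site-b"), EdgeNode("site-c")
    a.neighbours = [b, c]  # a simple three-node mesh
    a.publish("traffic-summary-0900", {"vehicles": 412, "avg_speed_kph": 54})
```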
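
Finally, to illustrate the statelessness item, here is a minimal stateless HTTP service built with only the Python standard library.  The endpoint and the Celsius-to-Fahrenheit conversion are arbitrary placeholders; what matters is that every request carries everything the handler needs and nothing is remembered between calls, so identical copies of the service could run in the cloud, in a cloudlet, or on an edge gateway behind any load balancer.

```python
# Minimal sketch of a stateless service: each request is self-contained,
# and no client or session data is kept between calls.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse


class StatelessHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        try:
            celsius = float(query["celsius"][0])
        except (KeyError, ValueError):
            self.send_response(400)  # the request did not carry what we need
            self.end_headers()
            return
        body = json.dumps({"fahrenheit": celsius * 9 / 5 + 32}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # e.g. curl "http://localhost:8080/convert?celsius=21.5"
    HTTPServer(("0.0.0.0", 8080), StatelessHandler).serve_forever()
```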

Is cloud already obsolete?

Cloud data centres are facilities across the globe where processing and storage capabilities are concentrated.  They are one of the central planks of modern economies.  Today they are required as critical infrastructure because very little processing can be done between the user device and the cloud; but once processing is done at the edge, the dependency upon the central role of the cloud will change.

Figure: The case for VM-based cloudlets in mobile computing

The massive storage and scalable resources available in the cloud will obviously not be accessible at the edge with its limited computing and storage capabilities, but the edge will become central for real-time processing.  However, an array of many cloudlet edge servers can equal or exceed a single cloud site.  This distribution of resources is built on the logic that if you want to go far, you go together, as opposed to the older logic that if you want to go fast, you go alone.  The power is in the collective: many nodes working as one harmonious and unified system.

We see this design in nature all the time; think of how bees function in the hive.  Bees within a colony work together.  A strong colony culture of collaboration, cooperation, and trust happens uniformly and automatically at every place within the hive.  There is a belief that being “in colony” will produce something exceptional, far greater than doing it alone.  The beehive honeycomb pattern exemplifies this concept of being interdependent and united.  The leader’s role is to build the colony culture.

The leader defines the “what” and the “why” and lets the staff define the “how.” 

To extend this metaphor to the federated thesis, the cloud will be in the leader’s role and define the “what” and the “why”, while the cloudlets will execute the “how”.  The organization of the LAN and WAN networks is the honeycomb structure of the colony, and it will define how the cloudlets are coupled – tightly, loosely, or not at all.  These cloudlets will “make and break” on the demand of the users and as driven by the application itself.  The cloudlets can work autonomously, which is to say completely independently, or they can act in varying degrees of harmony, as a unified, interconnected, flat composition that functions both laterally and vertically.  The cells in the human body follow a similar architecture, and there are many other models found in nature that exhibit this same mesh design.

It is important to note that the edge will not have an existence of its own without the backing of the cloud, but the cloud will become a much more passive technology, since the resources required for processing and/or storage will be decentralized along the cloud/edge continuum.


Processing a user’s data on servers located at the edge, without leaving a data footprint outside the local network, is more secure than leaving all of the data in the cloud.  More public edge devices, such as internet gateways or mobile base stations, will however hold the data footprints of many users, so the systems required to fully protect the edge are still a major research focus.

Questions remain to be answered throughout the adoption process, but the inevitable conclusion is clear: the edge will change not only the cloud’s future, but also the future of those of us who depend on it every day.

So, is cloud actually dead?  Perhaps, as we know it today, it is.  However, it remains a key component of the federated model, so it will not actually go away; its role will simply be greatly diminished.  It will continue to morph into something new and different from what we see today.


About the Author:

Michael Martin has more than 35 years of experience in systems design for broadband networks, optical fibre, wireless and digital communications technologies.

He is a business and technology consultant. Over the past 14 years with IBM, he has worked in the GBS Global Center of Competency for Energy and Utilities and the GTS Global Center of Excellence for Energy and Utilities. He is a founding partner and President of MICAN Communications and before that was President of Comlink Systems Limited and Ensat Broadcast Services, Inc., both divisions of Cygnal Technologies Corporation (CYN: TSX).

Martin currently serves on the Board of Directors for TeraGo Inc (TGO: TSX) and previously served on the Board of Directors for Avante Logixx Inc. (XX: TSX.V). 

He serves as a Member, SCC ISO-IEC JTC 1/SC-41 – Internet of Things and related technologies, ISO – International Organization for Standardization, and as a member of the NIST SP 500-325 Fog Computing Conceptual Model, National Institute of Standards and Technology.

He served on the Board of Governors of the University of Ontario Institute of Technology (UOIT) [now Ontario Tech University] and on the Board of Advisers of five different Colleges in Ontario.  For 16 years he served on the Board of the Society of Motion Picture and Television Engineers (SMPTE), Toronto Section. 

He holds three master’s degrees, in business (MBA), communication (MA), and education (MEd). As well, he has diplomas and certifications in business, computer programming, internetworking, project management, media, photography, and communication technology.