Persistence is the twin sister of excellence. One is a matter of quality; the other, a matter of time. – Marabel Morgan
As 5G cellular emerges, the biggest problem for carriers is the need to know everything they can about their existing height assets, whether those are telecommunication towers, rooftops, or water towers.
In Canada, there are over 13,000 large cell sites today. In 2019, there were 395,562 mobile wireless cell sites in the United States. An estimated 4 million telecom towers are installed worldwide, growing at a compound annual growth rate of 4.1% through 2020. In 2014, the global market for tower construction was estimated at $20.3 billion. By 2020, the total installed base will have risen from 4 million towers to 5 million.
These are huge numbers for cell sites. Now consider that the documentation for these legacy 3G and 4G sites is largely missing, grossly incorrect, or plagued with small but significant errors. Before 5G installations can begin in earnest, these critical details must be known. As a result, there is a panicked need to get the house in order and clean up this mess surrounding cell site situational awareness.
Most cell site knowledge resides informally in the heads of the field crews who know their sites well. Yet it is these same field crews who have failed to maintain, or have coded errors into, the existing asset management systems meant to keep the documentation packages up to date. Just as likely, the carriers run out-of-date asset management platforms that put up serious barriers even when crews do want to update the asset library properly.
Which antennas are on these towers? How many sectors are installed? How high are they? Which way are they aimed? Do they have any mechanical downtilt? If so, at what angle? What condition are the cables in? Has the weatherproofing shrink-wrap blown off, deteriorated from UV breakdown, or is it still in good condition? Understanding the who, where, when, why, and what of each cell site is mandatory before the 5G renovation and retrofit begins.
As 5G also needs orders of magnitude more new small cell sites, the number of physical locations to understand deeply could climb to 300,000 or even 500,000 in Canada during the next three years. With explosive growth factors of 10x, 15x, 20x, or more, the number of new small cell sites globally could increase to many hundreds of millions. Possessing deep awareness of these sites is fast becoming an overwhelming task. Yet it is now one of the most critical tasks that carriers must achieve. If the mobility carriers continue the way they are now, all will be lost.
The A.I. Robotic Data Harvesting Solution
To the rescue come artificial intelligence, drones, asset management platforms, and specialized software to harvest the field data and move it into these systems in order to construct a computerized virtual digital twin of the height assets.
Quite simply, a digital twin is a virtual model of a process, product, or service. This pairing of the virtual and physical worlds allows analysis of data and monitoring of systems to head off problems before they even occur, prevent downtime, develop new opportunities and even plan for the future by using simulations.
Digital twins could address all the wireless carrier’s challenges. A digital twin is the melting pot of many of the latest technologies including big data analytics, artificial intelligence (AI)/machine learning (ML), immersive experiences, the cloud, sensors, open standard APIs and 5G technologies, all of which are available to take telecommunications into the much-desired digital future. A digital twin could help providers intelligently design their services and networks and with its proactive monitoring and predictive maintenance functionalities it could potentially put an end to customer complaints.
Accuracy and precision are essential when gathering the digital twin data. Faulty data drives faulty decision-making downstream and can catastrophically disrupt carefully planned implementation projects and deployments, causing serious losses of time and crew mismanagement.
All of this error will lead to consumer dissatisfaction as delayed 5G cell site installations handcuff the new 5G smartphones now flooding the market.
These new smartphones are wildly expensive, easily costing over $1,000 each, so consumers will expect immediate enhancements and benefits, with exciting performance and new applications. Yet if the carriers fail to deploy the millions of new 5G cell sites and correctly retrofit the existing ones, they will effectively leave the new 5G smartphones operating just like the older 4G models.
The backlash from consumers will boil over, and the ripple effect will be felt widely. The promise of 5G will be lost, and the phone makers will suffer too. Breaking trust with the consumer will have profound repercussions at a time when COVID has already done significant damage to the cellular marketplace.
The dearth of adequate documentation on cell sites must be resolved now.
The elimination of surprises in the field is the principal objective. A secondary goal is to document, efficiently and in great detail, exactly which assets are actually installed at each cell site. The numerous updates, changes, repairs, and disassemblies of cell sites through the previous generations – 1G, 2G, 3G, and 4G – have resulted in poor or missing documentation, and the situation will only be thrown into further disarray with the advent of 5G cellular, which is composed of four types of cellular sites: traditional large sites plus three classes of small cells (micro, pico, and femto).
So, How Do We Fix This Problem?
The majority of 5G deployments will occur at higher frequency bands, including 3.5 GHz in the sub-6 GHz range and the 25 GHz to 40 GHz mmWave bands. Today, operators use a disparate set of tools, typically siloed between departments, that do not provide a holistic view of the design, deployment, and operation of networks. The coming massive increase in site density cannot be supported with existing site selection methodologies.
Digital twin simulators give infrastructure providers and network operators a cohesive and intelligent view of their networks at city scale by combining the power of big data, machine learning, GIS, and RF planning in a digital twin network model. Whether a carrier is entering a new geographic market, upgrading an existing network, deploying new spectrum bands, or launching a new technology, the simulator provides network modelling and site selection to pick sites that meet network, user KPI, and network economic targets.
A digital twin platform significantly reduces risk and uncertainty in large scale investments, eliminating the need for truck rolls and drive tests while accelerating the time to market, optimizing network investment, and providing an accurate and holistic view of networks.
You start by building a database of every existing location. You digitize any documentation and CAD drawings and attach them to the site profile. Pulling whatever known inventory assets you have into this database is a critical starting point, and characterization parameters get loaded too. Information such as the latitude and longitude of the site, the elevation of the terrain above sea level, the height of the tower or rooftop, how the equipment is sheltered, and where the antennas are mounted all needs to be collected and reported to the system.
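The baseline characterization record described above can be sketched as a simple data structure. The field names, site ID, and values here are illustrative assumptions, not a carrier standard:

```python
from dataclasses import dataclass, field

@dataclass
class SiteProfile:
    """Baseline characterization record for one cell site (illustrative fields)."""
    site_id: str
    latitude: float            # decimal degrees, WGS84
    longitude: float
    ground_elevation_m: float  # terrain height above sea level
    structure_type: str        # "self-supporting", "guyed", "rooftop", "water-tower"
    structure_height_m: float
    shelter_type: str          # e.g. "walk-in cabinet", "equipment room"
    antenna_mounts: list = field(default_factory=list)  # mount descriptions
    documents: list = field(default_factory=list)       # digitized CAD drawings, PDFs

site = SiteProfile(
    site_id="ON-0042",
    latitude=43.6532, longitude=-79.3832,
    ground_elevation_m=76.0,
    structure_type="self-supporting",
    structure_height_m=45.0,
    shelter_type="walk-in cabinet",
)
# Attach a digitized legacy drawing to the site profile.
site.documents.append("ON-0042_tower_CAD_1998.pdf")
```

Even a skeleton like this forces the carrier to decide, per site, which parameters are mandatory before any drone flight is scheduled.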
The second step is to deploy a crew with a qualified drone. Crews must be fully licensed and trained, and they must comply with all regulations. The weather needs to be suitable to fly the drone too. A specialized software package is downloaded into the drone controller to file the flight plan up, down, and all around the tower. How you capture the data for a guyed tower differs from how you do it for a self-supporting tower, and the capture methods for a water tower or rooftop differ again.
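The point that each structure type needs its own capture pattern can be expressed as a simple dispatch. The pattern descriptions are illustrative; real mission-planning software would encode these as full waypoint files:

```python
# Hypothetical mapping from structure type to a capture pattern summary.
FLIGHT_PATTERNS = {
    "guyed": "helical orbit with a wide standoff radius to avoid guy wires",
    "self-supporting": "tight helical orbit of each tower face, top to bottom",
    "rooftop": "lawnmower grid over the roof plus low orbits of each sector",
    "water-tower": "concentric orbits of the bowl plus vertical passes on the riser",
}

def select_flight_pattern(structure_type: str) -> str:
    """Pick the capture pattern for a structure type, failing loudly on unknowns."""
    try:
        return FLIGHT_PATTERNS[structure_type]
    except KeyError:
        raise ValueError(f"No capture pattern defined for {structure_type!r}")
```

Failing loudly on an unknown structure type matters: a crew arriving on site with the wrong flight plan is exactly the kind of field surprise the digital twin is meant to eliminate.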
The drones get a lot of attention from the engineers (I call it techno-lust) since they hold a technological attraction and are fun to an engineering mind. But make no mistake: they are just tools to harvest massive amounts of big data. The data they collect includes differential GPS coordinates accurate to less than 1 cm for precision elevation and azimuth bearings, a multitude of RGB photographic images taken by the drone camera, and LiDAR measurements recorded by the on-board laser systems.
The drone follows its flight plan and records its data. The data then needs to get uploaded to the cloud for analysis and processing.
Access to networks to upload this library of images and measurements can happen in real-time or in a store-and-forward manner. I prefer real-time uploads, or at least uploads in near real-time after the drone has been recovered. Due to the volume of data, the network needs to be robust and high speed. Since most towers permit cellular connections, this is an ideal location to upload data from the cell site itself. This way, the data can be confirmed on the cloud before the drone crew departs the site.
Cloud computing, storage, analytics, and AI are used to process the data. There is no need to wait a day or two for the drone crew to return to the office to process it. Validation of the initial ingest can be completed, and the processing computed, the same day or later that night while the crew is still near the site. Because cloud resources are elastic, this approach reduces the overall cost of compute power and avoids costly local storage. With big data files, the volume of data can be expensive to manage anywhere other than on the public cloud: resources are dynamic, so you pay for what you use and pay nothing when they sit idle. Multiple crews can feed field data into the same shared platforms.
With AI, we can make sense of the data and stitch it all together into a 3D model. From the model and the raw field data, the AI can extrapolate measurements, angles, and heights, and classify the assets installed at the location. These parameters can be compared and contrasted with the initial documentation package, and any conflicts or additions can be checked by employees to finalize the profile package. This is a combined effort between man and machine to ensure the integrity of the data records in the profile.
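Comparing the AI-extracted parameters against the documentation package can be as simple as a field-by-field diff with per-field tolerances, with anything out of tolerance queued for human review. The field names, measurements, and tolerances below are invented for illustration:

```python
def find_conflicts(documented: dict, measured: dict, tolerances: dict) -> list:
    """Return (field, documented, measured) triples where the drone-derived
    value disagrees with the documentation beyond a per-field tolerance."""
    conflicts = []
    for fieldname, doc_val in documented.items():
        meas_val = measured.get(fieldname)
        if meas_val is None:
            continue  # parameter not measured on this flight
        tol = tolerances.get(fieldname, 0.0)
        if abs(meas_val - doc_val) > tol:
            conflicts.append((fieldname, doc_val, meas_val))
    return conflicts

documented = {"antenna_height_m": 42.0, "azimuth_deg": 120.0, "downtilt_deg": 2.0}
measured   = {"antenna_height_m": 41.97, "azimuth_deg": 133.5, "downtilt_deg": 2.1}
tolerances = {"antenna_height_m": 0.1, "azimuth_deg": 2.0, "downtilt_deg": 0.5}

# The azimuth is off by 13.5 degrees, well beyond tolerance, so only it is flagged.
flags = find_conflicts(documented, measured, tolerances)
```

The employee then resolves each flag: either the field crew rotated the antenna and never updated the records, or the extraction was wrong; either way a human closes the loop.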
With an API, the profile can be shared with an asset management system (AMS), and it is in the AMS that other data is correlated, such as trouble tickets, replacement parts inventory, fix reports, test reports, repair visit reports, and site inventory. The AMS can flag irregularities and anomalies for its users.
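Pushing the finalized profile into the AMS over an API might look like the sketch below. The endpoint path, payload shape, and values are hypothetical, since every AMS vendor defines its own schema:

```python
import json
from urllib import request

def build_ams_payload(site_id: str, profile: dict, conflicts: list) -> bytes:
    """Package a site profile update for the AMS, including flagged conflicts
    so the AMS can surface irregularities and anomalies to its users."""
    body = {
        "site_id": site_id,
        "profile": profile,
        "flags": [{"field": f, "documented": d, "measured": m}
                  for f, d, m in conflicts],
    }
    return json.dumps(body).encode("utf-8")

def build_ams_request(base_url: str, payload: bytes) -> request.Request:
    # Hypothetical endpoint; a real integration would add authentication
    # headers and submit the request with urllib.request.urlopen(req).
    return request.Request(f"{base_url}/api/sites/update", data=payload,
                           headers={"Content-Type": "application/json"},
                           method="POST")

payload = build_ams_payload("ON-0042", {"antenna_height_m": 41.97},
                            [("azimuth_deg", 120.0, 133.5)])
req = build_ams_request("https://ams.example.com", payload)
```

Keeping payload construction separate from transport makes the same profile easy to replay into a second system, such as an RF planning tool, through a different adapter.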
Drone inspection means asset management teams can conduct comprehensive, repeatable, autonomous inspections and collect imagery that can be difficult to access by traditional means. The latest technologies, such as multi-rotor drones and advanced sensors, allow for site scanning to be conducted more efficiently and accurately.
In late 2017, Australia’s major telecommunications company, Telstra, had 9,000 operating mobile network sites, covering 2.4 million square kilometres. The opportunity to create value by increasing inspection efficiencies and capturing data for whole-of-life applicability is obvious. Drone inspection of power and mobile phone infrastructure means there is no physical requirement for a rigger onsite, so nothing needs to be turned off for the task.
The logic of using an AMS is as follows:
- Single source of truth: make informed asset capacity, refresh, and vendor decisions using accurate asset portfolio data
- Total cost of ownership tracking: determine asset costs throughout their actual lifecycles
- Asset governance control: control asset distribution and enforce policy, contract, and regulatory requirements
- Asset audit management: simplify audit preparations and strengthen change management risk calculations
- Asset provisioning: automate asset request, fulfillment, and ordering processes
- Service catalog: determine standard offerings for authorized users or groups within a service catalog
- Inventory management: manage stockroom inventory of hardware and consumable assets and define physical and logical stockroom hierarchies
Once you have a comprehensive profile for a specific asset, you can start to do some meaningful work. You can ask questions of the model. You can run “what if?” scenarios to test your design hypotheses and explore alternate solutions. You can evaluate structural loading situations and consider what remediation is needed to fortify a tower with new lateral and diagonal struts. You can consider self-interference at heavily loaded locations and conduct regulatory compliance analysis. By doing all of this modelling within the digital twin, you can assess the budgets, work efforts, schedule, and quality outcomes within the safety of the model. Once the model is confirmed, then it can be built.
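One such “what if?” loading scenario can be sketched as below. The drag formula is the standard F = ½ρv²·Cd·A static form, but the tower capacity, wind speed, drag coefficient, and panel areas are invented for illustration; a real analysis would follow ANSI/TIA-222 or local structural codes:

```python
RHO_AIR = 1.225  # air density, kg/m^3, at sea level

def wind_force_n(wind_mps: float, drag_coeff: float, area_m2: float) -> float:
    """Static drag force F = 0.5 * rho * v^2 * Cd * A, in newtons."""
    return 0.5 * RHO_AIR * wind_mps ** 2 * drag_coeff * area_m2

def what_if_load(antenna_areas_m2, wind_mps=40.0, drag_coeff=1.2,
                 tower_capacity_n=7000.0):
    """Compare total antenna wind load against an assumed tower capacity.
    Returns (total_load_N, needs_reinforcement)."""
    total = sum(wind_force_n(wind_mps, drag_coeff, a) for a in antenna_areas_m2)
    return total, total > tower_capacity_n

# What if we add three more 0.8 m^2 5G panels to the six panels already mounted?
existing = [0.8] * 6
proposed = existing + [0.8] * 3
load_now, reinforce_now = what_if_load(existing)       # within assumed capacity
load_after, reinforce_after = what_if_load(proposed)   # exceeds it: add struts
```

The point is not the arithmetic but the workflow: the scenario is evaluated against the digital twin, and the decision to add lateral and diagonal struts is made before a crew ever climbs the tower.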
In the past, as I was forever seeking new and different ways to reduce costs, speed up implementations, and exceed quality and standards expectations, it was always worthwhile to extend the computer modelling beyond the digital twin into the implementation stages of a project. Let the modelling tools develop the cable run lists, cable labels, signage, and asset inventory for the build, as well as order the parts, connectors, shrink wrap, and cabling, assign the workforce, and liaise with weather systems to schedule the work for the optimum time frame. Crew scheduling is an art form, and computers can do it well. So possessing a digital twin of the crews and their capabilities, along with their on-hand inventories of goods and materials, greatly enhances the workflow and optimizes the utilization of the workforce teams.
A step that is often ignored or forgotten is to validate the build and refresh the AMS with the reality of the site. If drive tests are done, feed them back into the design and modelling tools to tweak the propagation modelling to reflect the reality of the coverage. By validating the model, the ‘as-built’ can inform the ‘to-be’ designs, and the modelling keeps learning and improving. With AI, the more we feed outcomes back into the inputs, the more the modelling improves with each iteration. If one quality is essential now, it is agility: the ability to flex and bend with the new paradigms imposed by our rapidly advancing world of telecommunications.
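Feeding drive tests back into the propagation model can be sketched as fitting the exponent n of the log-distance model PL(d) = PL(d0) + 10·n·log10(d/d0) to measured samples. The drive-test values below are invented to show the mechanism; a production planning tool would calibrate a far richer model:

```python
import math

def fit_path_loss_exponent(samples, pl_d0_db: float, d0_m: float = 1.0) -> float:
    """Least-squares fit of the exponent n in the log-distance model
    PL(d) = PL(d0) + 10 * n * log10(d / d0), from (distance_m, loss_dB) pairs."""
    num = den = 0.0
    for d, pl in samples:
        x = 10.0 * math.log10(d / d0_m)   # regressor for this sample
        num += x * (pl - pl_d0_db)
        den += x * x
    return num / den

# Invented drive-test samples: the real environment behaves like n = 3.0 with
# PL(1 m) = 40 dB, while the original plan assumed free space (n = 2.0).
samples = [(d, 40.0 + 10 * 3.0 * math.log10(d)) for d in (50, 100, 200, 400, 800)]
n_fit = fit_path_loss_exponent(samples, pl_d0_db=40.0)
```

Updating the planning tool from the assumed n = 2.0 to the fitted n ≈ 3.0 is exactly the ‘as-built’ informing the ‘to-be’: every subsequent coverage prediction inherits the correction.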
The capability is there – so is the expertise. The answer lies in bringing them together effectively. That means adopting more of an ecosystem approach to digital twins and working with partners who can integrate not just the various systems, but the underlying design, build and operational expertise that informs development.
There is the all-important CAPEX vs OPEX question. Renewed capital constraints have placed the issue front and centre, but a successful digital twin strategy is based more on a TOTEX – total expenditure – approach. Digital twins can reduce OPEX by a significant amount, with direct consequences for profitability. But getting the necessary tech specs to deliver on those requirements will almost certainly rebalance the CAPEX and OPEX calculation. Again, this is something that the right systems integrator will be able to support.
The problem with digital twins to date has not been one of technology. It has been one of approach: of technology first rather than industry first. Above all, successful digital twins are about a successfully led ecosystem; no single company can do all this alone. Instead, strength comes from networks and a real alliance. That is how science fiction becomes a digital reality.
About the Author:
Michael Martin has more than 35 years of experience in systems design for applications that use broadband networks, optical fibre, wireless, and digital communications technologies. He is a business and technology consultant. He offers his services on a contracting basis. Over the past 15 years with IBM, he has worked in the GBS Global Center of Competency for Energy and Utilities and the GTS Global Center of Excellence for Energy and Utilities. He is a founding partner and President of MICAN Communications and before that was President of Comlink Systems Limited and Ensat Broadcast Services, Inc., both divisions of Cygnal Technologies Corporation (CYN: TSX). Martin currently serves on the Board of Directors for TeraGo Inc (TGO: TSX) and previously served on the Board of Directors for Avante Logixx Inc. (XX: TSX.V). He has served as a Member, SCC ISO-IEC JTC 1/SC-41 – Internet of Things and related technologies, ISO – International Organization for Standardization, and as a member of the NIST SP 500-325 Fog Computing Conceptual Model, National Institute of Standards and Technology. He served on the Board of Governors of the University of Ontario Institute of Technology (UOIT) [now OntarioTech University] and on the Board of Advisers of five different Colleges in Ontario. For 16 years he served on the Board of the Society of Motion Picture and Television Engineers (SMPTE), Toronto Section. He holds three master’s degrees, in business (MBA), communication (MA), and education (MEd). As well, he has three undergraduate diplomas and five certifications in business, computer programming, internetworking, project management, media, photography, and communication technology. He has earned 20 badges in next generation MOOC continuous education in IoT, Cloud, AI and Cognitive systems, Blockchain, Agile, Big Data, Design Thinking, Security, and more.