By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.

Eliezer Yudkowsky

A new day is now dawning for Artificial Intelligence. Its gestation as an innovative technology has taken decades, but it is only now, in 2023, that the infant has begun to walk. We are about to enter the growth phase commonly referred to as the “terrible twos”.

As Dr. Jay Hoecker of the Mayo Clinic stated,

The term “terrible twos” has long been used to describe the changes that parents often observe in 2-year-old children. A parent may perceive this age as terrible because of the rapid shifts in a child’s mood and behaviors — and the difficulty of dealing with them. One minute your child might be clinging to you, and the next he or she is running in the opposite direction.

These changes, however challenging, are a normal part of child development. Two-year-olds undergo major motor, intellectual, social and emotional changes. Also, children at this age can understand much more speech than they can express — a factor that contributes to emotions and behaviors that are difficult for parents to interpret.

Two-year-olds are struggling with their reliance on their parents and their desire for independence. They’re eager to do things on their own, but they’re beginning to discover that they’re expected to follow certain rules. The difficulty of this normal development can lead to inappropriate behavior, frustration, out-of-control feelings and tantrums.

This description of a child’s early developmental stage is akin to what AI is going through now.

The time may have finally come for artificial intelligence (AI) after periods of hype followed by several “AI winters” over the past 60 years. AI now powers so many real-world applications, ranging from facial recognition to language translators and assistants like Siri and Alexa, that we barely notice it. Along with these consumer applications, companies across sectors are increasingly harnessing AI’s power in their operations. Embracing AI promises considerable benefits for businesses and economies through its contributions to productivity growth and innovation. At the same time, AI’s impact on work is likely to be profound. Some occupations, as well as demand for some skills, will decline, while others will grow and many will change as people work alongside ever-evolving and increasingly capable machines.

This is the promise of AI that we are all hearing today. But is AI really here yet? Is it truly mature? Is it dependable and trustworthy? Or is it just suffering through its early developmental phases and acting out like a two-year-old?

AI Ethical Issues

AI has serious ethical implications. Because AI systems develop their own learning from data, those implications may not be evident until a system is deployed. The story of AI is littered with ethical failings: privacy breaches, bias, and AI decision-making that could not be challenged.

It’s therefore important to identify and mitigate ethical risks while AI is being designed and developed, and on an ongoing basis once it is in use. 

But many AI designers work in a competitive, profit-driven context where speed and efficiency are prized and delay (of the kind implied by regulation and ethical review) is viewed as costly and therefore undesirable. 

AI Regulations

Currently, governments are playing catch-up as AI applications are developed and rolled out. Despite the transnational nature of this technology, there is no unified policy approach to AI regulation or to the use of data.

It is vital that governments provide ‘guardrails’ for private sector development through effective regulation. But this is not yet in place, either in the US (where the largest amount of development is taking place) or in most other parts of the world. This regulation ‘vacuum’ has significant ethical and safety implications for AI. 

Some governments fear that imposing stringent regulations will discourage investment and innovation in their countries and lose them a competitive advantage. This attitude risks a ‘race to the bottom’, where countries compete to minimize regulation in order to lure big tech investment. 

The EU and UK governments are beginning to discuss regulation, but plans are still at an early stage. Probably the most promising approach to government policy on AI is the EU’s proposed risk-based approach. It would ban the most problematic uses of AI, such as AI that distorts human behaviour or manipulates citizens through subliminal techniques.

AI Privacy

Probably the greatest challenge facing the AI industry is reconciling AI’s need for large amounts of structured or standardized data with the human right to privacy.

AI’s ‘hunger’ for large data sets is in direct tension with current privacy legislation and culture. Current law in the UK and Europe limits both the potential for sharing data sets and the scope of automated decision-making. These restrictions are limiting the capacity of AI.

During the COVID-19 pandemic, there were concerns that it would not be possible to use AI to determine priority allocation of vaccines. (These concerns were allayed on the basis that general practitioners provided oversight of the decision-making process.)

More broadly, some AI designers said they were unable to contribute to the COVID-19 response due to regulations that barred them from accessing large health data sets. It is at least feasible that such data could have allowed AI to offer more informed decisions about the use of control measures like lockdowns and the most effective global distribution of vaccines.

Better data access and sharing are compatible with privacy, but they require changes to regulation. The EU and UK are considering what adjustments to their data protection laws are needed to facilitate AI while protecting privacy.
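One standard technique that makes this claim concrete is differential privacy; the sources cited here do not name it, so the sketch below is illustrative only. The idea is to add calibrated random noise to aggregate statistics, so that useful population-level answers can be shared without exposing any individual record. A minimal sketch in Python, with entirely hypothetical data:

```python
# Minimal sketch of a differentially private count using the standard
# Laplace mechanism. The data set, predicate, and epsilon value are
# hypothetical placeholders, not drawn from any cited source.
import numpy as np

def dp_count(values, predicate, epsilon=1.0, rng=None):
    """Count records matching a predicate, plus Laplace noise with
    scale = sensitivity / epsilon (a simple count has sensitivity 1)."""
    rng = rng or np.random.default_rng()
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: share roughly how many patients are over 65 without
# revealing whether any particular patient is in that group.
ages = [34, 71, 68, 45, 80, 59]
print(dp_count(ages, lambda a: a > 65, epsilon=0.5))
```

A smaller epsilon adds more noise, giving stronger privacy at the cost of accuracy. That trade-off between data utility and individual privacy is exactly the kind of balance regulators would need to codify.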

AI Bias

It’s no secret that people harbor biases — some unconscious, perhaps, and others painfully overt. The average person might suppose that computers — machines typically made of plastic, steel, glass, silicon, and various metals — are free of prejudice. While that assumption may hold for computer hardware, the same is not always true for computer software, which is programmed by fallible humans and can be fed data that is, itself, compromised in certain respects.

Artificial intelligence (AI) systems — those based on machine learning, in particular — are seeing increased use in medicine for diagnosing specific diseases, for example, or evaluating X-rays. These systems are also being relied on to support decision-making in other areas of health care. Recent research has shown, however, that machine learning models can encode biases against minority subgroups, and the recommendations they make may consequently reflect those same biases.

AI bias occurs because human beings choose the data that algorithms use, and also decide how the results of those algorithms will be applied. Without extensive testing and diverse teams, it is easy for unconscious biases to enter machine learning models. Then AI systems automate and perpetuate those biased models.

For example, a US Department of Commerce study found that facial recognition AI often misidentifies people of color. If law enforcement uses facial recognition tools, this bias could lead to wrongful arrests of people of color.
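One concrete form of the “extensive testing” mentioned above is a subgroup audit: measuring a trained model’s accuracy separately for each demographic group, since a large gap between groups is a common warning sign of encoded bias. The sketch below is hypothetical and not drawn from any cited study; the stand-in model, features, labels, and group labels are placeholders:

```python
# Minimal sketch of a subgroup-accuracy audit. The stand-in "model",
# features, labels, and group labels are hypothetical placeholders.
import numpy as np

def subgroup_accuracy(model, X, y, groups):
    """Return accuracy per subgroup so gaps between groups are visible."""
    return {
        g: float(np.mean(model.predict(X[groups == g]) == y[groups == g]))
        for g in np.unique(groups)
    }

# A deliberately skewed stand-in model that always predicts class 1.
class AlwaysOne:
    def predict(self, X):
        return np.ones(len(X), dtype=int)

X = np.zeros((6, 3))                      # features (ignored by the stand-in)
y = np.array([1, 1, 1, 0, 0, 1])          # true labels
groups = np.array(["A", "A", "A", "B", "B", "B"])

print(subgroup_accuracy(AlwaysOne(), X, y, groups))
# {'A': 1.0, 'B': 0.333...} -- the accuracy gap flags potential bias.
```

Real audits go further, using fairness metrics such as equalized odds or demographic parity, but the principle is the same: measure performance per group before deployment, not just in aggregate.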

Conclusion

So, like a toddler, artificial intelligence is still very immature, yet it is developing. It is advancing so fast that the issues above may all, hopefully, be resolved soon. But, like a child, there are always several more phases of growth to be mastered before maturity is truly reached.

When it comes to learning, there is nothing quite like the mind of a young child. At birth, the brain is still developing, and it continues to do so for years to come. Although the human brain never stops changing throughout our lifetimes, in those early formative years it is essentially a machine for soaking up information and experiences.

For this reason – and because brain activity is famously hard to recreate artificially – it might just be the perfect starting point for AI.

References:

Hoecker, J. (2022). Infant and toddler health. Mayo Foundation for Medical Education and Research (MFMER). Retrieved January 30, 2023, from https://www.mayoclinic.org/healthy-lifestyle/infant-and-toddler-health/expert-answers/terrible-twos/faq-20058314

Jones, K., Buchser, M., & Wallace, J. (2023). Challenges of AI. Chatham House, The Royal Institute of International Affairs. Retrieved January 28, 2023, from https://www.chathamhouse.org/2022/03/challenges-ai

Marr, B. (2022). The problem with biased AIs (and how to make AI better). Forbes. Retrieved January 28, 2023, from https://www.forbes.com/sites/bernardmarr/2022/09/30/the-problem-with-biased-ais-and-how-to-make-ai-better/?sh=71c5c7194770

Nadis, S. (2022). Subtle biases in AI can influence emergency decisions. Massachusetts Institute of Technology. Retrieved January 28, 2023, from https://news.mit.edu/2022/when-subtle-biases-ai-influence-emergency-decisions-1216

Starr, M. (2014). AI learns like a real toddler. CNET. Retrieved January 30, 2023, from https://www.cnet.com/science/toddler-simulator-learns-like-a-real-child-in-real-time/

About the Author:

Michael Martin is the Vice President of Technology with Metercor Inc., a Smart Meter, IoT, and Smart City systems integrator based in Canada. He has more than 40 years of experience in systems design for applications that use broadband networks, optical fibre, wireless, and digital communications technologies. He is a business and technology consultant. He was a senior executive consultant for 15 years with IBM, where he worked in the GBS Global Center of Competency for Energy and Utilities and the GTS Global Center of Excellence for Energy and Utilities. He is a founding partner and President of MICAN Communications and before that was President of Comlink Systems Limited and Ensat Broadcast Services, Inc., both divisions of Cygnal Technologies Corporation (CYN: TSX). Martin served on the Board of Directors for TeraGo Inc. (TGO: TSX) and on the Board of Directors for Avante Logixx Inc. (XX: TSX.V). He has served as a member of SCC ISO-IEC JTC 1/SC-41 – Internet of Things and related technologies, ISO – International Organization for Standardization, and as a member of the NIST SP 500-325 Fog Computing Conceptual Model, National Institute of Standards and Technology. He served on the Board of Governors of the University of Ontario Institute of Technology (UOIT) [now Ontario Tech University] and on the Board of Advisers of five different colleges in Ontario. For 16 years he served on the Board of the Society of Motion Picture and Television Engineers (SMPTE), Toronto Section. He holds three master’s degrees, in business (MBA), communication (MA), and education (MEd), as well as three undergraduate diplomas and five certifications in business, computer programming, internetworking, project management, media, photography, and communication technology. He has completed over 30 next-generation MOOC continuing-education courses in IoT, Cloud, AI and Cognitive systems, Blockchain, Agile, Big Data, Design Thinking, Security, Indigenous Canada awareness, and more.