Reading Time: 6 minutes

“Trusting AI is not a matter of faith, but of evidence. When transparency ends, manipulation begins, and the machine stops serving humanity and starts shaping it.” – MJ Martin

Introduction: The Hidden Cost of Convenience

In our rush to embrace the remarkable capabilities of artificial intelligence, we must not overlook a stark and growing reality: when we share information with AI systems, we often give up far more than we realize, and we must be careful what we share. The promise of convenience is alluring, yet beneath it lurk profound privacy risks, data-harvesting mechanisms, and training protocols that benefit large organizations more than the individual user. The message here is unequivocal: there is good in this technology, but the risks are immediate, severe, and under-discussed.

The Core Risk: Training on User Data

At the heart of the matter, many AI systems, especially the large language models (LLMs) and generative systems we interact with, are trained on vast troves of data. Some of that data originates in public sources, but increasing amounts come from users: conversations, prompts, uploads, and images. The Stanford Institute for Human-Centered Artificial Intelligence (HAI) warns that user inputs may be used to train future models unless explicitly excluded. In other words, your apparently private query may become part of a training set that shapes responses for you and for others in the future.

This leads to the first major risk: data reuse without informed consent. Many users assume that what they share is ephemeral or personal, invisible beyond the immediate exchange. In fact, when an input becomes training data, it may be stored, indexed, reused, and referenced directly or indirectly in future responses. Privacy researchers have documented that training datasets can inadvertently contain sensitive or identifying information. The legal basis for such processing is often opaque, and users frequently lack any realistic ability to opt out, delete their data, or even know where their words or images have gone.
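To make this concrete, consider a deliberately simplified sketch of how a provider-side job might sweep chat logs into a training corpus when exclusion is opt-out rather than opt-in. Everything in it (the ChatRecord type, the opted_out flag, the build_training_corpus function) is hypothetical and describes no vendor’s actual pipeline; it only shows why a default of “included unless excluded” matters.

```python
# Hypothetical sketch of provider-side training-data collection.
# None of these names reflect a real vendor's pipeline; they only
# illustrate the "included unless explicitly excluded" default.
from dataclasses import dataclass

@dataclass
class ChatRecord:
    user_id: str
    prompt: str
    opted_out: bool = False  # exclusion is opt-out, so the default shares

def build_training_corpus(logs: list[ChatRecord]) -> list[str]:
    """Collect every prompt whose author has not explicitly opted out."""
    return [record.prompt for record in logs if not record.opted_out]

logs = [
    ChatRecord("u1", "Summarize my blood test results: glucose 9.1 mmol/L ..."),
    ChatRecord("u2", "Draft a resignation letter for me", opted_out=True),
]

print(build_training_corpus(logs))
# ['Summarize my blood test results: glucose 9.1 mmol/L ...']
```

The point of the sketch is the default: the burden of exclusion falls entirely on the user, and every prompt whose author never found the setting flows straight into the corpus.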

“Trusting AI is like handing a mirror to a stranger. You may see yourself reflected clearly; or distorted by motives you cannot see.” – MJ Martin

Data Leakage and Model Regurgitation

Even if the prompt you share is not explicitly re-published, models can sometimes reproduce snippets of training data or infer private details. Moreover, technical research emphasizes that the risks extend well beyond simple memorization: they include inference attacks, context leakage at runtime, leakage through retrieval-augmented generation (RAG) pipelines, and agentic systems that draw from multiple sources. In other words, the system might “know” more about you than you realize, and that knowledge might escape its intended confines.
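A toy model makes the memorization risk tangible. The character-level Markov chain below, a minimal stand-in for a real LLM, is trained on a tiny corpus that contains one planted secret; prompted with a nearby fragment, it reproduces the secret verbatim. The corpus, the secret string, and the six-character context window are invented for illustration; production models are vastly larger, but published extraction attacks exploit the same in-kind behaviour.

```python
# Toy demonstration of training-data memorization with a character-level
# Markov model. Real LLMs are far more complex, but the in-kind risk of
# verbatim recall is the same.
import random
from collections import defaultdict

corpus = (
    "the weather is nice today. "
    "my api key is SECRET-12345. "  # a planted "private" detail
    "the weather is nice today. "
)

ORDER = 6  # characters of context used to predict the next character
model = defaultdict(list)
for i in range(len(corpus) - ORDER):
    model[corpus[i:i + ORDER]].append(corpus[i + ORDER])

def generate(seed: str, length: int = 30) -> str:
    """Continue `seed` one character at a time from the learned table."""
    out = seed
    for _ in range(length):
        successors = model.get(out[-ORDER:])
        if not successors:
            break
        out += random.choice(successors)
    return out

print(generate("my api"))  # -> "my api key is SECRET-12345. the weat"

# Deleting the training text afterwards does not help: the secret now
# persists in the model's parameters (here, the transition table).
del corpus
print(generate("my api"))  # still reproduces the secret verbatim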

This reality contradicts the perception that AI conversations are sealed in digital vaults. They are not. Each exchange has the potential to feed a self-improving system designed to absorb, recall, and recombine human data in ways even its creators may not fully anticipate.

Lifecycle Risks and Opaque Retention

The lifecycle of AI systems creates new attack surfaces and enduring privacy harms. The European Data Protection Board (EDPB) has outlined how data flows – from collection, training, and deployment through to inference and fine-tuning – introduce risks at every stage. These include inadequate anonymization, aggregation risks, and hidden retention of chat logs or training feedback for years. What seems like a harmless conversational prompt today may reside indefinitely in an opaque dataset accessible to unknown actors or future versions of the model.

This continuity of data retention and reuse challenges established notions of digital privacy. Once shared, your data may never truly be deleted. It may persist in model weights, distributed backups, or derivative datasets. That permanence redefines what it means to lose control of personal information.

“AI does not deserve trust simply because it performs well. It earns trust only when it demonstrates transparency, accountability, and respect for the people whose data built it.” – MJ Martin

The Power Imbalance: Who Really Owns Your Data?

A troubling asymmetry exists between users and the organizations that build AI systems. Individuals sharing information typically have little to no knowledge of how their content will be used and no ability to negotiate terms. Those organizations, by contrast, have far more resources and legal leverage than any individual user.

Stanford HAI’s white paper Rethinking Privacy in the AI Era argues that traditional notice-and-consent frameworks are inadequate for the scale and architecture of modern AI systems. When users share health symptoms, upload personal photos, or input confidential work material, they have no meaningful say over whether that data is used for training, nor over what future inferences will be drawn from it. This undermines autonomy and agency, the very principles upon which ethical AI is supposed to rest.

The Regulatory Gap: Laws Struggling to Keep Up

Though there are admirable efforts to update privacy laws, AI’s novel architectures often outpace regulation. The Stanford AI Index report highlights a rise in AI-related incidents involving privacy and data breaches, yet many organizations remain underprepared. Many technical studies focus narrowly on data memorization rather than on the broader and more insidious forms of privacy threat: inference, context-based learning, and agentic behaviour.

Regulatory frameworks such as the EU’s AI Act, Canada’s proposed Artificial Intelligence and Data Act (AIDA), and various provincial privacy laws have begun to address these issues. Yet their enforcement mechanisms lag behind the speed of AI innovation. As a result, users remain exposed in a system whose governance is still a work in progress.

“To trust AI without oversight is to surrender judgment to an algorithm. Technology should inform human choice, never replace it.” – MJ Martin

Is There Any Good Here?

It would be disingenuous to paint the story as entirely bleak. There are genuine benefits in AI technology. It can enhance productivity, accelerate education, assist people with disabilities, and automate tedious tasks. AI can democratize creativity and make sophisticated tools available to ordinary citizens. These are profound advantages.

But the good cannot justify unlimited surveillance or the silent harvesting of personal data. The good exists, but only if it is pursued alongside accountability, transparency, and the right to be forgotten. Without those guardrails, the good becomes complicit in harm.

The Collective Dimension

One major oversight in public discussion is that privacy is not just personal; it is collective. When individual data is aggregated, traced, or inferred, the harm may extend beyond the original user. It affects families, workplaces, communities, and even nations. Profiling, discrimination, and the chilling of free expression all stem from the collective consequences of private exposure.

Another missed dimension is time. Data used for training may persist indefinitely, evolve in unforeseen ways, and resurface years later. We often assume that the risk is immediate, but the real danger lies in its permanence. What we whisper to an AI today might re-emerge as part of a future system that knows us better than we know ourselves.

Finally, there is the illusion of safety. Because we are not developers or corporate data scientists, we assume our small contributions are inconsequential. Yet every prompt, upload, and image incrementally trains the next generation of AI. Every user is a contributor, whether they consent or not.

“Every time we trust an AI without question, we give it a little more power to define what truth means. Trust must be conditional; or it is not trust at all.” – MJ Martin

Summary: The Self in the Age of Machines

We must be careful what we share with AI. Until transparency, governance, and meaningful consent are built into the infrastructure, users should treat AI systems as data sinks that potentially harvest everything they receive. Being cautious does not mean rejecting AI; it means approaching it with awareness and self-protection.
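Self-protection can begin before a prompt ever leaves your machine. The sketch below scrubs a few obvious identifiers with regular expressions before anything is shared; the patterns and placeholder tokens are illustrative inventions, and real PII detection needs far more than a handful of regexes, but even a crude pre-filter builds the habit of pausing before you share.

```python
# A minimal self-protection sketch: scrub obvious identifiers from a
# prompt before sending it anywhere. The patterns below are illustrative
# only; real PII detection requires far more than a few regexes.
import re

REDACTIONS = {
    r"[\w.+-]+@[\w-]+\.[\w.]+": "[EMAIL]",   # email addresses
    r"\b(?:\d[ -]?){9,15}\d\b": "[NUMBER]",  # phone/account digit runs
}

def scrub(prompt: str) -> str:
    """Replace likely identifiers with neutral placeholders."""
    for pattern, placeholder in REDACTIONS.items():
        prompt = re.sub(pattern, placeholder, prompt)
    return prompt

print(scrub("Email me at jane.doe@example.com or call 416-555-0199."))
# -> "Email me at [EMAIL] or call [NUMBER]."
```

A filter like this is no substitute for vendor-side guarantees, but it shifts the first line of defence back to the only party with an unambiguous interest in your privacy: you.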

The benefits of AI are real, but they depend on the preservation of human dignity, autonomy, and privacy. If we ignore the risks for the sake of convenience, we risk building a world where our private thoughts, drafts, and ideas feed invisible engines without our permission. The good in AI will only be realized if we insist on ethical use and regulatory restraint.

Privacy is not an obstacle to innovation. It is the condition for trust. And in the age of artificial intelligence, trust is the rarest and most valuable resource of all.


About the Author:

Michael Martin is the Vice President of Technology with Metercor Inc., a Smart Meter, IoT, and Smart City systems integrator based in Canada. He has more than 40 years of experience in systems design for applications that use broadband networks, optical fibre, wireless, and digital communications technologies. He is a business and technology consultant. He was a senior executive consultant for 15 years with IBM, where he worked in the GBS Global Center of Competency for Energy and Utilities and the GTS Global Center of Excellence for Energy and Utilities. He is a founding partner and President of MICAN Communications and before that was President of Comlink Systems Limited and Ensat Broadcast Services, Inc., both divisions of Cygnal Technologies Corporation (CYN: TSX).

Martin served on the Board of Directors for TeraGo Inc (TGO: TSX) and on the Board of Directors for Avante Logixx Inc. (XX: TSX.V). He has served as a Member, SCC ISO-IEC JTC 1/SC-41 – Internet of Things and related technologies, ISO – International Organization for Standardization, and as a member of the NIST SP 500-325 Fog Computing Conceptual Model, National Institute of Standards and Technology. He served on the Board of Governors of the University of Ontario Institute of Technology (UOIT) [now Ontario Tech University] and on the Board of Advisers of five different Colleges in Ontario – Centennial College, Humber College, George Brown College, Durham College, and Ryerson Polytechnic University [now Toronto Metropolitan University]. For 16 years he served on the Board of the Society of Motion Picture and Television Engineers (SMPTE), Toronto Section.

He holds three master’s degrees, in business (MBA), communication (MA), and education (MEd). As well, he has three undergraduate diplomas and seven certifications in business, computer programming, internetworking, project management, media, photography, and communication technology. He has completed over 60 next-generation MOOCs (Massive Open Online Courses) of continuing education on a wide variety of topics, including Economics, Python Programming, Internet of Things, Cloud, Artificial Intelligence and Cognitive systems, Blockchain, Agile, Big Data, Design Thinking, Security, Indigenous Canada awareness, and more.