TL;DR: Most coaches feel stuck: use AI with client data and risk their clients' confidentiality, or avoid AI entirely and fall behind. The problem is that we're asking the wrong question. Instead of "should I use AI with client data?" we should be asking "which AI architectures actually protect confidentiality?" There's a critical distinction most coaches don't know about, and it changes everything.


The Starting Point We All Need

Colin Scotland recently posted something that got me thinking:

"Never put sensitive client data into AI tools. No exceptions. No excuses. Non-negotiable."

Colin Scotland, AI Integration Coach

My first reaction? He's absolutely right. For most coaches using consumer AI tools, "never" is exactly the right advice. It's the safe default. It protects clients. It honors our professional ethics.

But here's what kept me thinking: Colin's stance assumes all AI tools work the same way. They don't.

When you type something into ChatGPT's free version, your data might be used to train future AI models. That's a hard no for client information. But most coaches don't realize that ChatGPT's consumer interface is fundamentally different from professional tools built on OpenAI's API.

This isn't about defending any particular tool (including ours). Understanding the landscape helps you make informed decisions instead of fear-based ones. When we say "never use AI" without understanding the technical differences, we might be avoiding tools that could genuinely help our clients while fully protecting their privacy.

The Distinction That Matters

Most coaches don't realize there's a massive difference between consumer AI services and professional tools built on AI infrastructure.

Think of it like the difference between posting on social media and sending a secure message through your bank's website. Both use the internet, but they handle your data completely differently.

Consumer AI tools (like ChatGPT's free version, Claude's consumer interface, or Google's Gemini):

  • Your conversations may be stored indefinitely

  • Your data might train future AI models

  • The company can often read what you type

  • Privacy policies can change anytime

  • Colin is 100% right—never put client data here

Tools using AI Infrastructure (like many professional coaching platforms):

  • Your data is processed and then deleted after a short retention window, not stored indefinitely

  • Nothing you send is used for training AI models

  • Data is held only briefly for processing and abuse monitoring, not archived long-term

  • Bound by strict data processing agreements

  • Designed specifically for sensitive professional use

OpenAI makes this crystal clear in their API documentation: "[…] data sent to the OpenAI API is not used to train or improve OpenAI models (unless you explicitly opt in to share data with us)." (Source: OpenAI Platform Documentation)

This isn't a minor technical detail. It's the difference between leaving client notes in a public café and keeping them in a locked filing cabinet.

When a tool like JourneyLoop (or any professionally-designed coaching platform) uses OpenAI's API, it's not the same as you typing into ChatGPT. The data flows through secure channels, gets processed in isolation, and disappears. No training. No long-term storage by the AI provider. No routine human review.
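To make that flow concrete, here's a rough sketch of the pattern a privacy-conscious platform can follow. Every name in it (process_note, FakeClient) is illustrative only, not JourneyLoop's or OpenAI's actual code: the point is that the raw note exists only for the duration of one call and is never logged or stored.

```python
def process_note(note: str, client) -> str:
    """Send one session note through an AI client and return the result.

    The raw note lives only in this function's scope: it is neither
    logged nor written to disk, and nothing is kept after the call.
    """
    prompt = f"Summarize this coaching session note:\n{note}"
    return client.complete(prompt)  # only the derived summary comes back


class FakeClient:
    """Stand-in for an API client (e.g. a wrapper around OpenAI's API
    with its default no-training behavior), so this sketch runs
    without network access or an API key."""

    def complete(self, prompt: str) -> str:
        return "summary: " + prompt[:40]
```

With a real API client the shape is the same: the platform forwards the note, receives the output, and keeps nothing the AI provider could later train on.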

Not every tool that claims to "use AI" follows these practices. Some are just fancy wrappers around consumer AI. Others truly implement professional-grade privacy architecture. As coaches, we need to know how to tell the difference.

Your Questions Matter More Than My Answers

This probably raises more questions than it answers. Good. That's exactly where we should be.

You might be wondering:

  • "How can I verify what a tool actually does with my data?"

  • "What about tools that don't use OpenAI - how do I evaluate those?"

  • "Even if the AI provider deletes data, what about the tool itself?"

  • "How do I explain this to clients who are worried about AI?"

  • "What specific questions should I ask vendors before trusting them?"

These are exactly the right questions. And honestly, they deserve more than a single blog post can provide.

That's why my next post will share a simple framework for evaluating any AI tool's privacy architecture - a guide you can use for any tool, including our competitors. Because the coaching profession needs this conversation.

Here's what I want to know from you: What specific privacy concerns keep you from exploring AI tools? Not the general worry, but the specific scenarios that make you hesitate. Is it about session notes? Client emails? Coaching patterns? Something else entirely?

Your questions and concerns will shape the evaluation framework I'm developing. I want to give you the knowledge to make informed decisions that align with your values and protect your clients.

Drop a comment or send me a message. Tell me what would need to be true about an AI tool's privacy architecture for you to even consider it. Or tell me why "never" is still your answer. I respect that position completely.

The coaching profession is evolving. AI will be part of our future. The real question is whether we'll approach it with fear or understanding. I'd rather we figure this out together, with full transparency and respect for the sacred trust our clients place in us.

What's your take? Are you in the "never" camp, the "curious but cautious" camp, or somewhere else entirely?

If you're a coach committed to continuous improvement and curious about AI in your practice, I'd love to hear from you. Share your questions, challenges, or insights at [email protected]. And if you want to see how reflective practice can deepen your coaching work, visit journeyloop.ai to learn more about JourneyLoop.
