Unlocking AI Trust: My Journey into the World of Data Annotation
By Luis Loaiza Ventosa, Conectys’ Vice President of Sales
When you’ve been in sales for a while, whether in Trust & Safety, Content Moderation, or Customer Experience Management, you get used to a certain rhythm. You know the personas, the pain points, the pitch. You understand how to translate operational excellence into bottom-line impact. Then, you step into Data Annotation & Labelling, and the game changes.
That’s exactly what happened to me over the past few months. As a sales enthusiast, I’ve always been fascinated by tech-enabled services and how we help global companies deliver real, measurable value.
But getting hands-on with our data annotation work? That opened up a whole new chapter.

Here’s what I’ve learned so far.
Lesson 1: You’re Not Selling a Service. You’re Selling Trust in AI.
Data annotation isn’t just about placing boxes around images or tagging transcripts. It’s about giving AI the fuel it needs to function ethically, accurately, and efficiently. Yet 70% of top-performing enterprises still struggle with AI data-quality gaps that hinder the effective use of AI at scale.
Why Clients Buy More Than Just Annotation
Our clients aren’t just buying a service. They’re purchasing certainty. They’re betting that our annotators, our QA processes, and our domain knowledge will train their models the right way, ethically, efficiently, and without cutting corners.
That’s why more companies are turning to data labelling outsourcing partners who combine scale with deep domain expertise.
In CXM, we talk about empathy and tone. In Trust & Safety, it’s about consistency and compliance.
In Data Labelling, it’s precision, context, and human discernment – at scale, under pressure, and with real-world impact. That’s the essence of human-in-the-loop AI, where machines learn better because people stay involved.
That’s a different value story, and one that demands every seller go beyond features and deeply understand the why behind the data.

Lesson 2: The Tech Conversation Is Only Half of It
I’m a bit of a tech nerd, and I love exploring new tools wherever they come up. Lately, there’s been an explosion in annotation tools and automation frameworks, which creates a temptation to lean too hard on the technical side. But when mislabelling just 20% of your data can tank model accuracy, you quickly realise it’s not just about the stack; it’s about the structure.
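To see why label quality sets a hard ceiling, here is a minimal, purely illustrative sketch (not Conectys tooling): even a model that fits its training labels perfectly can never be more faithful to ground truth than those labels are. Flip 20% of binary labels and the "perfect" model tops out at roughly 80% real-world accuracy.

```python
import random

random.seed(42)

# Synthetic ground truth: 10,000 binary labels.
true_labels = [random.randint(0, 1) for _ in range(10_000)]

# Corrupt the training data: flip roughly 20% of the labels at random.
noise_rate = 0.20
noisy_labels = [
    1 - y if random.random() < noise_rate else y
    for y in true_labels
]

# A hypothetical model that memorises its noisy training labels exactly:
predictions = noisy_labels

# Measured against ground truth, accuracy is capped near 1 - noise_rate.
accuracy_vs_truth = sum(
    p == y for p, y in zip(predictions, true_labels)
) / len(true_labels)

print(f"Accuracy against ground truth: {accuracy_vs_truth:.1%}")  # ~80%
```

The exact figure varies with the random seed, but the point holds: no amount of model sophistication recovers quality that the labels never contained.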
More Than Tools: Selling Nuance in the Age of AI
The conversation is full of annotation tools, automation, LLM training data pipelines, edge cases, and QA thresholds. And the money follows: the data annotation tools segment alone is projected to reach $5.33B by 2030.
And yes, those things matter. But what I’ve learned from conversations with AI teams at major tech firms and startups alike is this:
They’re not just looking for tools. They’re looking for partners who understand the nuances behind data annotation.
This is why I was so excited to come to Conectys, where we’ve built annotation teams that specialise in language nuance, cultural context, and domain-specific tagging.
That human element is our differentiator. In sales conversations, I emphasise how our global workforce and operational maturity make us a reliable partner, not just another vendor.

Lesson 3: Scale Doesn’t Mean “More People”. It Means “Smarter Systems”
Coming from large-scale CXM operations, I’ve seen what it takes to ramp quickly—dozens of people across departments, multiple geos, and high-pressure coordination. And now, all of that is being channelled into one goal: powering data annotation for machine learning.
The Hardest Part of ML? The Data. Not the Model.
But annotation at scale? That’s a different animal. It’s not just about adding people.
It’s about sampling logic, version control, retraining cycles, QA thresholds, and tight integration with the client’s MLOps workflows.
It’s a huge endeavour and one that can strike fear in the hearts of even the toughest clients and most experienced delivery teams.
Especially when 80% of an ML project’s time can be spent on data preparation and labelling, and yes, that means model-building is the “easy” 20%.
That’s been one of the most exciting parts of my journey: working with our delivery and tech teams to understand how we design annotation programs that grow with our clients.
We utilise smart workflows, rigorous QA, and our proprietary annotation frameworks to ensure that data isn’t just labelled quickly – it’s labelled accurately.
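One building block of that kind of QA is checking that annotators agree with each other more than chance would predict. The sketch below is a hypothetical illustration, not our proprietary framework: it computes Cohen’s kappa, a standard inter-annotator agreement metric, on a small gold sample and gates the batch on an example threshold.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Inter-annotator agreement corrected for chance agreement."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: share of items where both annotators match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement: probability both pick a class independently.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (freq_a[c] / n) * (freq_b[c] / n)
        for c in set(labels_a) | set(labels_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical gold sample: two annotators label the same 10 items.
annotator_1 = ["spam", "ok", "ok", "spam", "ok", "ok", "spam", "ok", "ok", "ok"]
annotator_2 = ["spam", "ok", "ok", "spam", "spam", "ok", "spam", "ok", "ok", "ok"]

kappa = cohens_kappa(annotator_1, annotator_2)
print(f"kappa = {kappa:.2f}")

# A simple QA gate: batches below the threshold go back for re-review.
QA_THRESHOLD = 0.7  # illustrative value, not an industry standard
batch_passes = kappa >= QA_THRESHOLD
```

In a real programme this check would run continuously over sampled items, with thresholds tuned per task and per domain; the point is that "rigorous QA" is a measurable system, not a slogan.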

So, what should a sales leader keep in mind when stepping into this world?
Here are my top 3 takeaways:
1. You’re not selling a service; you’re selling trust in AI.
2. The tech conversation is only half of it; human nuance and domain expertise close the deal.
3. Scale doesn’t mean more people; it means smarter systems.
Final Thoughts
Entering the data annotation space has been one of the most rewarding pivots in my sales career.
It’s not easy, and the learning curve is real. But it’s where some of the most exciting innovation in AI, CX, and knowledge bases is happening right now. If you’re in AI sales and looking for your next challenge, this might just be the frontier for you, too.
Conectys is proud to be part of the AI revolution – not just building the future but labelling it, line by line, with care and context. And I’m lucky to be selling that story every day.
Let’s connect if you’re curious about how human-powered annotation can give your AI the edge it needs. We’re a trusted data labelling partner helping build the future of AI with care, context, and integrity.
Data Annotation & Labelling: the Numbers You Can’t Ignore
$17.1B – The projected size of the global data collection and labelling market by 2030, growing at a 28.4% CAGR.
$5.33B – The expected size of the data annotation tools segment alone by 2030 (26.5% CAGR).
80% – The share of an ML project’s time that disappears into data prep and labelling. (Yes, that means model-building is the “easy” 20%.)
20% – The share of labels you need to get wrong to send your model’s accuracy nosediving.
70% – The percentage of top-performing enterprises still struggling with data quality gaps that block AI at scale.
Sources: Grand View Research, Gartner, Huble, Global Information Inc., Pragmatic Institute, Datasaur, KDnuggets.
FAQ Section
1. Why is trust such a crucial factor when selling data annotation services?
Trust is the foundation of AI success. Clients rely on data annotation not just for volume but for precision and ethical accuracy. Selling annotation means selling confidence that AI models will perform reliably, powered by high-quality, well-understood human-labelled data.
2. How does data annotation go beyond just technology and tools?
While advanced annotation tools and automation are important, the real value lies in human expertise, encompassing an understanding of cultural context, language nuances, and domain-specific details. This human element ensures data quality and meaningful AI outcomes beyond what technology alone can achieve.
3. What does scaling annotation work truly involve beyond adding more people?
Scaling annotation means smarter systems, not just more annotators. It requires sophisticated workflows, sampling logic, version control, and seamless integration with ML operations to ensure accuracy and efficiency at scale.
4. Why should sales leaders focus on impact rather than inputs when discussing data annotation?
Clients care about outcomes, specifically how annotation improves model precision, recall, and deployment speed, rather than the number of people working behind the scenes. Positioning around measurable business impact drives more substantial client confidence and deeper partnership.
5. How does data annotation connect with the broader AI and CX ecosystem?
Annotation touches every stage of AI model training, compliance, ethics, and product performance. Understanding this ecosystem helps sales teams articulate value that extends beyond labelling, making annotation a critical component of trusted, scalable AI solutions.


