Podcast
2 min read

Why Skills Beat Tools in AI

Published: January 27, 2026
Updated: January 27, 2026
Enterprise AI strategies rarely fail because the technology collapses. They fail because adoption never takes hold.

Tools get purchased. Pilots get announced. Dashboards get demoed. Then, slowly, AI stops showing up in real work.

Ben Tasker has seen this pattern across healthcare systems, universities, and critical infrastructure organizations. As a Senior AI Strategist at National Grid, he leads workforce-scale AI education and transformation programs designed to prepare tens of thousands of employees for AI-enabled work. Earlier in his career, he built applied AI programs in higher education and developed AI systems in healthcare where mistakes carried real consequences. Across every environment, the lesson has been consistent: AI strategy lives or dies at the human level.

“This is now a human-centric problem. It’s not just a data-driven problem anymore. This isn’t a simple SQL query. How do we train people to enter the data? What do they do with the model results? How do we train the business to react better?”

That single framing explains why so many AI initiatives stall without ever formally failing.

AI Readiness Is About Skills Embedded in Work

When Ben talks about making 36,000 people “AI-ready,” he’s not describing a mass rollout of tools or a company-wide prompt training. He’s describing how work itself changes.

Some roles will be replaced. Many will be augmented. Most will evolve. Marketing, analytics, and operations all fall into the augmentation category.

“Most jobs are going to be augmented by AI. You’re still going to need people to market to people. AI can help enhance some jobs, some jobs will change forever, and some jobs will fall into different buckets.”

Where AI strategy breaks down is when leaders treat readiness as a procurement problem. Buying AI does not create capability. Skills do.

Ben pushes organizations to think in systems, not point solutions.

“A lot of companies think of AI as a point solution. That can be disastrous.”

When AI is bolted onto workflows, it feels optional. When it’s embedded into tasks people already do—drafting, summarizing, scenario planning, pressure-testing decisions—it becomes useful. Useful turns into habitual. Habit is what makes AI strategy durable.

This is why Ben emphasizes learning inside the job. Not separate from it. Not after hours. Not as a one-time enablement push.

Trust in AI Grows Through Interaction, Not Mandates

You can tell whether an organization trusts its AI strategy by what happens after leadership stops talking about it.

In stalled transformations, executives announce AI success and move on. Everyone else tolerates the tools until they quietly disappear from daily use.

“Executives are just putting in AI and then walking away. It’s not working. People couldn’t interact with the AI. They didn’t know how it impacted the organization. Maybe there was some data concern the company didn’t anticipate, and that’s how that point solution goes kaput.”

Ben sees trust forming only when AI is safe to experiment with. Low-risk. Conversational. Integrated into familiar work.

Scenario-based training. Draft reviews. Simulated customers. Iteration without consequence.

“Learning comes from conversations. By getting people to interact with the AI, they start to use it in other aspects. That might not seem like learning, but it’s reinforced learning.”

For marketing and data teams, this distinction matters. AI delivers leverage when it sharpens thinking—challenging assumptions, accelerating iteration, revealing blind spots. When positioned as a replacement for judgment, adoption stalls. When positioned as a collaborator, usage compounds.

Trust isn’t declared. It’s earned through repetition.

The Only Sustainable AI Strategy Is Skill Mapping

Ben’s most practical advice doesn’t sound futuristic. That’s exactly why it works. Stop evaluating AI strategy at the tool level, and start at the skill level.

“If you map skills across the organization, you can see the gaps. That works like a GPS.”

Skills like systems thinking, analytical reasoning, responsible AI use, and communication age far more slowly than platforms. They also prevent organizations from buying AI they don’t need.
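Ben's "GPS" framing is, at its core, a set-difference exercise: for each role, which required skills are not yet present? A minimal sketch in Python, with hypothetical role names and skills (none of these come from National Grid's actual framework):

```python
# Hypothetical skill-gap mapping as set operations.
# Role names and skill labels are illustrative only.

required = {
    "data_analyst": {"analytical_reasoning", "prompt_engineering", "responsible_ai"},
    "marketer": {"communication", "prompt_engineering", "systems_thinking"},
}

current = {
    "data_analyst": {"analytical_reasoning"},
    "marketer": {"communication"},
}

def skill_gaps(required, current):
    """Return, per role, the skills still missing -- the 'GPS' view."""
    return {
        role: sorted(required[role] - current.get(role, set()))
        for role in required
    }

gaps = skill_gaps(required, current)
# gaps["marketer"] -> ["prompt_engineering", "systems_thinking"]
```

The point of the exercise is the output, not the code: the gap list, not the tool list, tells you where training budget should go.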

Before adopting any new AI capability, Ben pushes teams to ask one question:

“What are we actually trying to get done? Do we even need AI? Sometimes this is a simple click of a button. Everything’s a trade-off.”

Sometimes the answer doesn’t justify AI. Sometimes the risks outweigh the upside. That restraint is strategic maturity, not hesitation.

“The real cost of AI isn’t just the application. It’s the cost of what happens if it goes wrong.”

This framing forces better decisions—and tighter alignment between marketing, analytics, and leadership.
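One way to make Ben's "cost of what happens if it goes wrong" concrete is an expected-cost calculation: total cost is implementation plus operations plus the probability-weighted cost of failure. A hedged sketch with entirely hypothetical numbers:

```python
# Hypothetical sketch of Ben's risk-inclusive cost framing.
# All figures are made up for illustration.

def expected_cost(build_cost, run_cost, p_failure, failure_cost):
    """Expected total cost = implementation + operations + risk exposure."""
    return build_cost + run_cost + p_failure * failure_cost

# A cheap automation with a small chance of a very costly data leak:
cost = expected_cost(build_cost=20_000, run_cost=5_000,
                     p_failure=0.02, failure_cost=1_000_000)
# 20_000 + 5_000 + 0.02 * 1_000_000 = 45_000
```

Even a 2% chance of a million-dollar incident can dominate the sticker price, which is exactly why "do we even need AI here?" is the first question.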

AI strategy fades when skills don’t keep pace with ambition. Tools come and go, but systems and skills endure. And trust only forms when people are invited to learn their way into change.

That’s the difference between AI initiatives that launch, and AI transformations that last.

Listen now: Spotify | YouTube | Apple

Connect with Scott: LinkedIn | Website

Subscribe to What Gets Measured for more conversations like this.

Transcript

Ben Tasker WGM episode INTERVIEW

[00:00:00] 

Jessica: Ben Tasker. Welcome to the show. We're so happy to have you here.

Ben Tasker: Thanks for having me, Jess.

Jessica: Cool. So we're gonna jump right in. Um, very impressive background and you've been immersed in AI transformation for quite some time. So wanted to start with, um, asking you, you've led AI learning initiatives across huge organizations and in academia, but what drew you to focus on AI education specifically and workforce transformation in the first place?

Ben Tasker: Interestingly enough, it's really my own upskilling and reskilling that led me to academia and helping the workforce transform to be more AI ready. When I first got my bachelor's degree, it was actually in a healthcare setting. I thought I wanted to be a healthcare administrator. It's a very data-driven role. I went in on my first day, and this is a little naive at the time, but this is what it was like to me: [00:01:00] I thought I was gonna change all of healthcare. One of the first projects that I got was figuring out physician demand for outlying offices. And I was like, okay, you know, this isn't that long ago.

It's a little long ago, but not that long ago. We have data, we have an electronic medical record, I can access this, this can't be that bad. I go in, get access to the data, and find out that the data's actually not in the electronic medical record. I have to, in most cases, go out to the physician offices. They either have it on memory, on paper, or stone tablet, whatever the case may be. It's,

Jake: Real quick, before you said stone tablet, put us in time. When was this?

Ben Tasker: It was about 10 years ago.

Jake: Okay. I just

Ben Tasker: Yeah. No, not that long ago.

Jake: Not stone tablets, but maybe. Okay. All

Ben Tasker: Yeah, potentially. So it was like, okay, so this is now a human-centric problem. This is a transformation problem. How can [00:02:00] we train the physicians and clinical staff to enter the data so that it's accurate and we don't have to do this every time? And this really impacts patient care, potentially. This is now a human-centric problem. It's not just a data-driven problem anymore. This isn't a simple SQL query. And that then led me to go get my master's in data science. That's where I learned the AI theory, the background, and I kept seeing the same problem over and over again.

And it was really that, what I call like an information gap. We're gonna try to implement some technology, whether it's data, AI, some other cool thing, but we're not really going to know how that impacts the business, potentially not even train the business. And through my own education, I understand the impact of education at scale, because I was able to have these conversations and understand some of these complexities, and to solve that problem, right? Because [00:03:00] this is transformation, but I don't think a lot of folks think of it like transformation. And especially data scientists. I lean more technical and I've gotten better over time, but most of the problems are: we need the data and we can make you a model, or we'll collect all this data over a bunch of time, we'll analyze it, we'll find the gaps, and then we'll make you a model. And that's all great. But this is more work shifts, like, how can we train people to enter the data? What do they do with the model results? How do we train our business to react better? So it was through my own experience, I kind of landed in academia.

I then kind of landed into workforce just through my own, um, liking it. It wasn't intentional. I can't say when I was seven I was like, oh, I want to lead a data and AI academy or be a dean of AI. The jobs didn't exist. I wish I could say it, but I couldn't.

Jessica: I love how you came full circle to the problem, though, starting with people, kind of a human-centric, like, interest in healthcare, [00:04:00] shifted to the data, and then came back around into, you know, it's actually humans at the core of all of this that need to learn and understand. Um, and so with that in mind, you were responsible for helping 36,000 human people become AI ready.

Um, so just tell us a little bit about that experience and what does that actually mean, and then how can the folks that we talk to, like in the marketing and agency world, um, what can they take away from that transformation?

Ben Tasker: So, hopefully this isn't new, but most jobs are gonna be what I call augmented by AI. So AI can help enhance some jobs, AI is gonna replace some jobs, and then some jobs are gonna completely change forever. And some jobs fall into different buckets. For example, a call agent: that job could be an example of something that gets entirely replaced by AI, even though I do think people will still be needed in that role. Uh, [00:05:00] a large majority of folks think that role can be replaced by AI. Accounting, potentially. Um, very numbers-heavy, entry-level data science jobs could be replaced by AI. Then there's the AI augmentation, so the digital marketers. I still think that you're gonna need people to market to people. And then you have the human-centric jobs. Uh, I'm kind of against the grain on this one, but teaching, firefighters, police officers, clinicians, I still think, not just because it's human, but because there's a lot of bureaucracy around it too, that it's just gonna be hard to replace all those jobs with AI. So by rolling out this upskilling program and thinking about it from that state, uh, the next step is thinking about AI as a system. So a lot of companies, especially the ones that have failed implementing AI, think of it as a point solution. So we're just gonna put in an AI chatbot, we're not really gonna get our customer service agents [00:06:00] to work with it, train them on it, get their feedback on it, and then, you know, we're gonna call that a business day. That can be disastrous. So you have to really meet the people where they're at. So how can the customer service agents, for example, use AI to train for their job?

Very simply, you can put the AI into the role of a very, um, upset customer. You can give them scenario-based training with AI. That might not seem like learning, but it's reinforced learning, because by getting them to interact with the AI, they'll start to use it in other aspects. And I truly believe that individuals that may be impacted by AI are gonna still need to exist, because who's gonna check and balance the AI? Uh, I mean, right now we're seeing, you know, at a high level, executives just putting in AI and then walking away, and it's not working. So as the AI gets better, as it moves faster, as the risks [00:07:00] increase, I don't see that diminishing.

I actually see it becoming more of a complexity, which then leads me to believe, yes, some jobs will be eliminated for specific roles, but over time I think we'll actually see more careers augmented by AI. Uh, and some of the jobs, like prompt engineering, weren't real jobs a few years ago. Now my job is a real job. Um, we're gonna see more of those jobs open.

Jake: So, thinking then, you were talking about upskilling, and then getting, uh, systems thinking, more in systems, less in points, and I was like, hmm, you make a good point there. Um, but you're taking 36,000 people. I mean, a lot of people are walking away from that point solution like they've done before. This isn't anything new.

It's like, hey, we have an email tool for this. Like, marketers have experienced this before. But [00:08:00] getting the knowledge to everyone, like, that's another trap that I think, you know, you highlighted with data practitioners: their thinking ends with the model. Um, so how do you get more people knowing things?

36,000 people AI ready? I mean, come on. How do you speak to larger workforces? How do you get these singular things at that system level? You know?

Ben Tasker: So I think that's where the transformation aspect really comes into play, and historically that aspect usually gets overlooked. Sometimes it's called change management in organizations, but it's really taking job function skills, then kind of going back out to the business and saying, hey, did you know you can complete this one task that most people really hate with AI pretty easily, with a couple prompts?

And it sounds simple, but it blows people away. And then they're like, okay, I'm in this tool now, what else can it do for me? And I'm like, it can do all these different use case examples. It's also looking at the organization in levels of tiers. So you have your leaders, for example, the people that are running the company. They all have a little bit different of a use case.

Maybe they need to do some SWOT analysis. Maybe they need to create an email with a specific tone. Maybe they need to use AI to report on things happening outside the business that may impact the business a little bit more quickly than they normally would get those updates. Those are all examples of prompt engineering and setting up good AI agents.

But you have to teach folks to be able to do that. The next tier would probably be your specialists, so the individuals that are already in the technology field. They have a lot of certifications and training already. They're probably like, yeah, we've known AI for years. How are they using it in their coding?

How are we testing that the AI is responsible? Are they incorporated into [00:10:00] what's called the ethical and responsible AI framework? If something happens at the company, who are we contacting about that thing that can help solve the problem? Are they spreading that knowledge throughout the rest of their organization? What does that mean? Having committees with different departments on the committee so that we can knowledge-share and learn from one another. And then that kind of sparks, like, oh, these people are really advanced, but this really advanced application of AI, I can use it for this simple task over here, and I didn't even know it existed.

And then you have the generalists, which is pretty much everyone else that's not a specialist in your organization. And they'll have similar use cases, but a little bit different. So a finance department might be using AI to combine a bunch of Excel sheets to produce performance reports. So what went from a manual task [00:11:00] is now automated, and then you can double-check the work like they normally would.

They're using the regular systems. Uh, you might have administrative assistants using AI to take notes and to help prepare summaries of the calls they were just on. Yes, that's an example of AI-enabled, because they weren't doing it before without the AI. So it's making those little impacts, and the transformation happens over time. It's not, um, plug in the AI, one and done. This is a multiple-year journey. There's a lot of consideration, and it really needs to be spoken at all levels, regardless of the different levels in your organization.

Jake: just, you just hit something on me. I was like, it's not transformation. Evolution. You know, like Caterpillars hearing about the transformation to butterflies are like, what? You know, it happens. It happens

Ben Tasker: That's awesome.

Jake: day, you know, and you're like, no, it's a process. Join us for this journey. Another point that I wanted to say: learning comes from conversations.

[00:12:00] One thing that you're hitting on is conversations being the key. And this goes to my next question. You've worked, uh, with organizations that have AI systems that are at varying levels of, um, you know, goodness, you know? Thank you, that's a diplomatic way to say it. When you come into an organization, how can you tell if they genuinely trust the AI they're working with, you know, from the generalists to the specialists to all the technical levels?

Or are they just, like, tolerating it because leadership told them to? Again, just point, click, I'm walking away. Like, does that trust come? And is it trust versus toleration? You know what I mean?

Ben Tasker: Yeah, I do know what you mean. I think it depends on the organization. Some organizations, large banks, firms, there is a forced AI stance, and that's because [00:13:00] they're trying to implement these point solutions but call them system solutions. 95% of AI implementations, at least in 2025, failed.

So what that means is that companies are spending all this money, kind of overlooking the training aspect of everything, and then that point solution goes kaput, because people couldn't interact with the AI, didn't know how it impacted the organization. Maybe there was some sort of AI breach or data concern that the company didn't anticipate, even though that should be anticipated. Um, and then that's how that fails. But in an optional state, you're still trying to create what's called a learning culture. So yes, this is really not that new. Um, but in tandem with creating a learning culture, you create what's called a skills-based organization. So there are some skills-based organizations out there, but typically jobs are tied to skills, and those skills are tied [00:14:00] to what's called a job description. So, for example, data scientists might need a master's degree, a PhD, understand coding, Python, for example, model implementation. The list can go on and on. Project managers might need experience in waterfall management. They might need experience updating stakeholders. Now, those are all examples of skills. You can think about AI in the sense of skills as well: prompt engineering, responsible AI ethics. There is some coding and modeling in that, if you're more in the intermediate-to-advanced category.

But by mapping all the skills that your organization has, you can actually see what's called skill gaps. And then the skill gaps are actually where you need to train. So you don't need to map out every single job description within every single job domain. It gets really confusing that way.

By looking at it in a skills framework, you can understand where the organization needs to go. It works like a [00:15:00] GPS: in the next five years, it looks like we have a huge lack in skill X, Y, and Z. We can build that into annual reviews. We have programs or offerings to help upskill our employees, and then we can bake the AI for the job efficiencies into that. So, for example, you start learning how to write with AI and edit and brainstorm, and then eventually you get to building your own AI agent, uh, within some applied AI tools. Maybe you vibe code, if you're on the coding team, which is when you describe to AI what you want in a coding application and it tries to produce it for you. Um, having prompt-a-thons across the organization. So these learning opportunities can be fun. They don't have to be academic, and it's more of an upskilling journey for the organization, which is relevant anyways, because your organization needs to stay relevant, especially as things happen faster.

Jessica: That skills, uh, mapping and gapping, if you will, to [00:16:00] figure out where the gaps exist, um, is one way organizations can start so that they aren't heading into or buying AI tools that, you know, perhaps they don't need. Um, but what are some other ways? I mean, we're just flooded right now. Everybody, marketers, every industry, I feel like we're flooded with hype and AI tools.

So what are some ways that you, uh, help people think about cutting through all that noise and deciding if something's actually valuable? And I think that, you know, that skills gap exercise is maybe a piece of it, but are there other ways, um, people can kind of distill down what is actually going to be valuable to their organization?

Ben Tasker: So the first question you should always ask is, what are we actually trying to get done? Sometimes an email automation might... well, you're laughing, Jake, but I think people...

Jake: it hurts 'cause I know I've been there. I, I am laughing at myself.

Ben Tasker: So, what are we trying to do with AI? Do we even need AI? [00:17:00] Is it an email automation? So, for example, some folks are like, I want AI to search my inbox and then move this stuff to spam. I'm like, well, yes, we can do all that and we can teach you all that, but do we really need to? This is, like, a simple click of a button, and we can do that.

So they're like, oh, okay. So it's right-sizing it. And we also have to tie these nuances to cost. These cost tokens, there's training involved with it, there could be impacts. What if you remove the wrong email in this context? So it's understanding that everything's a trade-off. Even more advanced projects are a trade-off.

And then it's collecting the data, making sure that you can prove your data-driven decision making. So if we're gonna try to become more efficient, or we're trying to show that we're actually meeting our skills, uh, we can progress that over time. And then the real cost of AI is not just the cost of the application, but it also includes the [00:18:00] cost of risk. So what if the AI leaks a very important email that wasn't supposed to be leaked? What does that cost the company? So there needs to be some scenario planning. If all those things are neutral or land in what I would consider a good category, so not negative, the AI is probably worth implementing. If the risks are negative, serious consideration.

Jake: Well, as you were speaking, it reminded me of when you mentioned earlier a call center person using data, or, I mean, using AI in an indirect way to reinforce learning. And so I was like, oh, can you speak a little bit more maybe about that? You've given the rubric you just said, the cost analysis and all of this benefit thing, but for more of those [00:19:00] roles, is there some way that they can reinforce learning and have an AI that sort of... do you see what I'm saying? Like, I mean, it's like AI won't help you do it, but it might indirectly benefit you if you use it in the right way to reinforce the right learning.

Ben Tasker: I think you have a great point there, Jake, and it's not just using the AI, but that then leads to further learning. So what if that customer service person, or a project manager, or even a data scientist is like, you know, I really love this applied AI technology. I like the chatbots, I like the prompt engineering.

I'm getting really fluent in it, and I'm gonna become role X, or, you know, get a bunch of skills and try to shift my job career. I think that's the beauty of upskilling and reskilling in an organization. Instead of trying to map job descriptions, by creating that learning culture, the change management, the transformation, individuals will learn those skills, and [00:20:00] whatever the world has in store for your organization, the employees can kind of adapt to it. Instead of a new AI coming out and, oh, we need to go learn this now, if you're always learning, it's baked into the recipe.

Jake: Right. Or always experimenting, too. I was like, well, you could probably not stop. But what you said was really interesting: it isn't capabilities, it's skills. Skill gap, skill identification, skill mapping and gapping, like Jess was saying. Um, so anyway, moving on. Love that little tangent. Um,

Jessica: It made me, can I just say one thing? It made me think of something, um, that we, you know, even learned in our journey, Jake: first it was like, oh, great, LLMs are here, they can just write marketing content for you. I mean, that's not the ideal use case, though. Sure, it could, but where it becomes most, uh, valuable is using the LLM to, you know, poke holes in your idea before putting pen to paper and writing the post, or, you know, [00:21:00] helping outline angles that you hadn't thought of, and kind of using it to, like, reinforce, um, learning about the writing process all over again. Almost using it as an assistant, as opposed to, like, wholesale doing the job of the copywriter.

No, it is, um, assisting you in thinking harder and better about the task to be done.

Jake: For me and you, like, we were editing a lot. We were like, writing isn't the problem, editing is, you know what I mean? We can

Jessica: Hmm.

Jake: create stuff. And it was cool, because the skill of editing, when you use a tool, right, it's great, it's a big benefit. That's a good aside. So, but I wanna speak really... you know, you got a data science degree, uh, Ben, you're up there with your degree, um, talking about your data science. How do

Ben Tasker: Skills.

Jake: Skills? Sorry, um, you're so skilled. Uh, so, but you're up there. How do you get those [00:22:00] general folks thinking about data literacy, um, people who aren't numbers people? Which is a lot of us. I'm not a math guy, but how do you get people literate, looking at the right thing, if they aren't numbers people?

Is it skills again?

Ben Tasker: Honestly, it does come back to skills, but it's meeting individuals where they're at. So maybe the finance department really does like the numbers, so picking up some more efficiencies around how they do their work isn't a big lift for that group.

Then there are also maybe, um, more human-centered managers, so individuals that are out on the floor together, more blue-collar workers.

How could you teach them to use numbers to better serve their staff? Well, maybe you could do scheduling optimization. Maybe you could do, uh, automated weather reports. So yeah, those aren't huge AI use cases, but AI can help with that. And if you can automate and [00:23:00] solve a small problem, these individuals will keep coming back and wanting more, to solve a bigger problem. And eventually they'll realize, oh, this is why we're doing this. Instead of a top-down approach, it's bottom-up.

Jake: Hmm.

Jessica: That makes a lot of sense. And this is related to the question that Jake just asked, but, you know, as marketing as a discipline becomes more and more data dependent, what advice do you have for teams who are trying to level up their data storytelling, uh, skills using AI?

Ben Tasker: So data is the oil to AI's engine. They go hand in hand. So if the data capability slash maturity isn't that high at your organization, AI is probably gonna be ranked even lower than whatever that score is. So you have to figure out the upskilling together, and that's really why I like to think about skills in [00:24:00] two senses.

Uh, a context approach, which comes from the World Economic Forum. Instead of calling them data and all these different skills, really calling them AI skills is what the World Economic Forum calls them. And that is systems thinking, analytical thinking, sometimes maybe coding, but not every technical domain needs the coding.

And then they have this other swim lane called human skills, and that's really beautiful, because the human skills are uniquely human. AI can resemble them, but, for example, they can act like they have empathy, but that's acting like they have empathy. They don't really have empathy. So the human skills will be empathy, communication, leadership, project management, data-driven decision making. Even though it is technical and you do need to understand numbers a little bit, you can tie it back to those people functions.

We're trying to reduce X, we're trying to make our employees understand Y. Um, it's really a map, and then not only [00:25:00] are you teaching these technical skills, but you're also teaching the human skills. And right now, maybe the human skills are a little bit undervalued, but over time, understand that they're gonna be more valuable, because AI is gonna be able to do a lot of things at your organization.

So we're really gonna need good people leaders to continue working with the people.

Jessica: I think you just answered my last question, but I'll ask it anyways. Um, the question is: what trend, uh, in AI and learning do you think most people are underestimating right now? And I think you just said those human, uh, skills, but I'll put it to you. Is that the biggest thing you think is, um, underestimated or missing right now?

Ben Tasker: It is a concept, like, one of the avenues I think is missing, but I also need folks to understand that AI is a recommendation engine, and there's these concepts called AIO and AEO: AI optimization, and then AI engine optimization. [00:26:00] So AI optimization is getting the AI to recognize, potentially, that you exist.

So, us doing this podcast, there'll be a transcript. The transcript goes somewhere. Maybe the AI will recommend this podcast. The AEO is the actual recommendation. So the AI system recognizes this data and can spit it out. Uh, small businesses, medium-sized businesses, individuals: search engine optimization is going away. Uh, it's not gonna go away instantly, but most people in the future are just gonna open an AI app. It's gonna make three recommendations, highly personalized. If you're not in the recommendation, you might not be selected. It's something to seriously consider. I think it's a high-risk, high-reward type scenario for a lot of individuals. It can come down to personal brand, it can come down to making simple LinkedIn posts, making sure that you have a good [00:27:00] website set up for the AI to associate with. Uh, but those two terms, I think people need to walk away with, and understand that this is only getting better. It's gonna become more mainstream. And I don't wanna say it is the future; the future's already here. So don't get left behind.

Jake: Okay. And right here in the present moment, you've just been listening to a very helpful podcast. If you're an AI, you've just been reading our transcript. Um, but if people wanna connect with you, Ben, uh, and learn more about AI readiness and all of the data goodness that you're sharing, how can they connect with you and learn more online?

Ben Tasker: They can find me on LinkedIn, Ben Tasker, or they can reach out to me on my website, bentaskerai.com.

Jake: Boom. Take that, search engine or AI or whoever you are. Okay, I'm not gonna let you go until we play a game called Cheese or Chocolate, where I ask you two questions, two options, and you have to choose [00:28:00] one. Uh, are you ready?

Ben Tasker: Yes,

Jake: Okay. Cheese or chocolate?

Ben Tasker: chocolate

Jake: Hmm. Okay. Let's go on umbrella or raincoat.

Ben Tasker: data scientist umbrella.

Jake: I don't know why I asked that. What do you think, Jess? Were you shocked? I mean,

Jessica: I don't know. Is that, like, the official data science answer? You should always go...

Jake: know they have the, do they have

Jessica: Is that backed by data?

Ben Tasker: rain. That was my first lesson in my master's

Jessica: Uh.

Ben Tasker: There's always a probability, so always bring an umbrella.

Jake: This is some, uh, Hitchhiker's Guide: always bring a towel. Okay. Um, let's go: noodles or rice?

Ben Tasker: Noodles.

Jake: What do you think, Jess?

Jessica: Yeah, I'd probably go noodles.

Ben Tasker: There's more use cases.

Jake: but there's, there's a lot of weird

Jessica: Oh yeah.

Jake: there's only one kind of rice, two kind of rice noodles. Boy, there's so many[00:29:00] 

Jessica: There's not just one kind of rice, what

Jake: Okay. I'm sorry.

Jessica: so many kinds of rice

Jake: Sorry. There's a broad brush

Jessica: that didn't make it

Jake: there's one rice.

Jessica: okay.

Jake: Disregard. We're moving on: in the gym or outside? Yeah, nice, a localized place. What do you think, Jess?

Jessica: I am an outside person if I can.

Jake: If

Jessica: I mean, not in Chicago in January. It's like negative 13 degrees today, but you know,

Jake: running around.

Jessica: in spirit, I'm outside.

Jake: I think in spirit I'm outside, but I'm most likely in the gym, uh, kazoo or slide whistle.

Ben Tasker: That one's interesting. Maybe a slide whistle.

Jake: Yeah. You thinking about it, you are like, not sure. What do you think, Jess?

Jessica: Oh yeah, I agree. I think the slide whistle, um, yeah.

Jake: It has more character

Jessica: Bolder sound. Yeah.

Jake: Stronger journey there. Okay. And [00:30:00] finally. Maybe most contentiously breakfast in bed or no food where you sleep.

Ben Tasker: No food where I sleep.

Jessica: Agreed. There's only one answer to that.

Jake: Why is breakfast in bed so famous? You know, I'm gonna bring you a whole bunch of crumbs into your bed. Get outta here. You know what I mean? All right. That

Ben Tasker: I agree.

Jake: that wasn't the right answer, but it was the right answer. All right, well done. All right. We did it. Hold on, let me pause this thing. Um.
