
Author: Jarmo Tuisk
I spoke with Andres Kostiv on his podcast Kasvuminutid about what really happens in organizations when they decide to "start using AI." My experience is quite clear on this point: most companies do not struggle because of tools, but because of skills, roles, processes, and change leadership. AI is not just a new button in existing software, but a new way of working that forces organizations to rethink how work gets done and who is responsible for what.
For me, the central point of this conversation was simple: AI implementation is not primarily technology implementation. It is a leadership question. If leadership itself does not understand what AI can actually do, where its limits are, and how it changes the nature of work, the whole organization will inevitably stay at the level of surface-level experimentation.
At the same time, this does not mean you should wait for a perfect strategy. Quite the opposite. An organization has to do two things at once: build foundational skills and capture low-hanging fruit. You need both practical pilots and patient change leadership. One webinar or one new tool is not an AI strategy.
TL;DR
- In my assessment, AI implementation is roughly 70% change leadership, 20% understanding data and processes, and 10% technology.
- A company should not start with the question "which AI tool should we pick," but with "where does our work actually get stuck today, and what should be redesigned."
- Skill gaps in organizations are widening. Top performers will find tools on their own, but leaders must help those who would otherwise fall behind.
- The AI lead role becomes genuinely necessary when AI is no longer experimentation but an organizational priority. Often a better name for this role is innovation lead.
- The AI lead should not become the "owner" of all AI projects. The business process owner must remain the real owner of that work even after AI is added.
- Risks, data protection, and responsible use are real topics, but blanket bans on tools do not solve them. Bans usually produce hidden usage.
- Many old information systems were not built for the AI era. In the future, systems must be designed so the user is not only a human, but also an agent.
- The true metric is not how many agents are used in the organization. What matters is whether the business becomes faster, more efficient, and more profitable.
Listen and watch the full episode: https://www.futurist.ee/kasvuminutid-ep69
Interview
Andres Kostiv: To start, a bit of context. Jarmo and I share a lot of interest in UX, product management, and AI. We both train organizations, and we both see quite closely how people actually start using these tools, or do not start. That is why I thought it would be interesting to unpack a question today: what is AI leadership, really? If we already have a CMO, HR lead, and IT lead, then who is the AI lead and what do they actually do?
Andres Kostiv: When an organization decides it now wants to use AI for real, what usually happens?
Jarmo Tuisk: The patterns split into two broad directions. One is bottom-up movement, where more proactive people and teams start experimenting on their own. The other is top-down movement, where leadership realizes that AI is no longer a niche topic, but a strategic question. Over the last year, that second pattern has become much stronger.
Andres Kostiv: What is pushing leadership?
Jarmo Tuisk: Competitiveness. The honest answer is that both ambition and fear are in play. There is fear that competitors will pull ahead, but at the same time there is a very practical search for ways to work more efficiently. That pressure is very real in companies today.
Andres Kostiv: At the same time, specialists discover AI tools on their own as well. Does that mean implementation could simply happen naturally from the bottom up?
Jarmo Tuisk: Only partly. Every organization has pioneers who adopt new tools before others. But alongside them there is also a large group of people who do not adopt these changes by themselves. If leadership is missing, skill gaps only widen. That is why the organization has to consciously support those who would otherwise fall behind.
Andres Kostiv: So in the early phase, ideas came more from people and teams, but now AI has reached the boardroom?
Jarmo Tuisk: Exactly. Earlier, it started more with someone trying something on their own, getting a result, telling a colleague, and then the wave spread. Now I increasingly see situations where leadership itself first wants to understand what this is, which decisions are strategic, and what role AI could play in the company's future.
Andres Kostiv: You made the point that the new baseline is that everyone already uses AI.
Jarmo Tuisk: Exactly. I would no longer compare people to their previous AI-free productivity. Today's comparison point is that others are using AI anyway. The question is no longer whether AI gives an advantage, but how well you yourself can use these working methods.
Andres Kostiv: It used to be measured by how well you could Google. Now maybe we should ask how well you work with AI.
Jarmo Tuisk: Yes, and here too different skill levels emerge quickly. Some people still use AI like Google: ask one question and expect one answer. In reality, a much more important skill is being able to run a work dialogue with the machine, clarify, provide context, test different angles, and steer that conversation with intent.
Andres Kostiv: You can feel that yourself too. Some emails or requests are already structured in a way that clearly shows AI helped shape them. I personally do not see a problem with that if the information gets across. But in organizations you can see that some people oppose AI-written text on principle.
Jarmo Tuisk: Yes, there is a lot of emotional relationship with technology there. Outputs are not judged neutrally, but immediately labeled based on where they came from. A typical reaction is: "Look, it made a mistake." But people often forget that some colleagues make exactly the same mistakes. In other words, we still do not have a calm, balanced relationship with this technology. There is a lot of projection, hope, fear, and frustration all at once.
Andres Kostiv: It reminds me a bit of old Word debates. At one point, a handwritten letter also felt somehow more authentic and personal than a document made on a computer.
Jarmo Tuisk: Exactly. Most likely, within a year or two we will not debate much anymore whether a text or input came from a person or from AI support. The question will shift elsewhere. We will look more at where value is actually created. Is it in how a person uses a model, in the system they build around it, or in the human-to-human discussion that comes from it?
Andres Kostiv: But what about skeptics in training sessions? People who feel this whole AI talk is either hype or somehow alien.
Jarmo Tuisk: You have to be honest with them. Skeptics are not convinced by slogans or overhyped sales talk. What helps is honest discussion about limits, errors, and where AI is not a fit. If a person understands you are not selling miracle medicine, but helping them truly understand what this tool is, trust starts to form.
Andres Kostiv: At the same time, there is FOMO on the other side too. Thousands of workflows, assistants, agents, and integrations, and the constant feeling that maybe you are missing something important.
Jarmo Tuisk: Yes, and that is exactly why it is important not to run after every new workflow. The organizational question is not whether we know thousands of different tricks. The question is whether we understand which skills actually create impact. And here we come back to skill gaps: top performers find tools themselves, but organizational success depends on whether others catch up.
Andres Kostiv: In Estonia, this skills gap has also been highlighted in several reports.
Jarmo Tuisk: Yes. For me, the key distinction is awareness versus application skill. Almost everyone has heard about the tools already. But that does not mean people can use them well in their own work context. A lot of people stay at the level where the chat bar is seen as a Google search box. One fact question, then: "I tried it." The real work actually begins where you start using AI as a way to do work, not just as an information lookup box.
Andres Kostiv: How much of all this is technology and how much is change leadership?
Jarmo Tuisk: I would put it this way: about 10% is technology, 20% is understanding data flow, information flow, and processes, and the remaining 70% is people. That means change leadership, building foundational skills, clarifying roles, and follow-through. If an organization runs one webinar and assumes AI is now implemented, it is mistaken.
Andres Kostiv: So one webinar for 300 people is not an AI strategy?
Jarmo Tuisk: Definitely not. There are very good organizations in Estonia that understand this and run programs that last a quarter or longer. First general baseline, then topic-specific workshops, then practical exercises, then follow-up activities. Adults do not adopt a new way of working in one day. AI implementation is truly a learning process.
Andres Kostiv: If we get practical, where do companies make the biggest mistakes?
Jarmo Tuisk: Very often focus slips too quickly to technology. A new tool appears, and people immediately ask where to attach AI. In reality, you should first unpack the existing process: where delays happen, where there is manual work, where information moves incorrectly, where people keep stubbing their toes. Only then does it make sense to ask whether and at which stage AI truly makes something better.
Andres Kostiv: You can see this well in daily work too. Take logistics or another service process, and people assume we now put AI on top of the entire process. In reality, you should inspect step by step where the process gets stuck and where it is meaningful to make it smarter.
Jarmo Tuisk: Exactly. Mapping the existing process is the key point. And from there comes the next question: is adding AI enough, or does the process itself need redesign? Sometimes the honest answer is that we should not just automate the old process, but ask why we are doing it in that form at all.
Andres Kostiv: So it makes no sense to just pour AI over the whole process?
Jarmo Tuisk: Exactly. Very often the problem is not missing AI at all, but that the process is poorly designed or existing tools are not used well. AI can help, but it does not automatically fix poor work organization.
Andres Kostiv: So sometimes the solution is process redesign, not a new model or a new assistant?
Jarmo Tuisk: Yes. And when a process truly changes, job roles, responsibilities, and expectations for people also inevitably change. There is no point sugarcoating that. Disruptive technology means some tasks disappear, some change, and some new ones appear.
Andres Kostiv: On tool selection too. In a large organization this is often a practical decision. If the whole company sits in Microsoft, Copilot is the first logical step. If you are in Google Workspace, you look there. If you are independent, maybe you choose Claude or ChatGPT.
Jarmo Tuisk: Yes, especially in the enterprise world, Microsoft is a very natural base. They do not even need aggressive selling. They just say: here is the button, click here. For raising baseline skills, that can be enough. Copilot may not always be the sharpest tool in the market, but from an organizational perspective its strength is that it sits in the same ecosystem where your files, permissions, and data security already are.
Andres Kostiv: But showing the button is not enough?
Jarmo Tuisk: Not enough. When we show people in training how to actually work in this conversation-based interface, their eyes widen. This is a new work paradigm. Before, you opened Word and wrote the document. Now you work inside a conversation. At first it can feel strange: is this really work? But this is exactly where the new mindset comes in.
Andres Kostiv: I recognize that too. You teach how to structure an ideal prompt, but you still sometimes slide back into the Google reflex.
Jarmo Tuisk: Yes, that is very human. But there is a major advantage in working with AI: the machine does not get tired. You can go another round, clarify, rephrase, open context, ask for another format, compare versions. Once people adopt that way of using it, results start to truly change.
Andres Kostiv: Is that why the need appears for a dedicated AI lead or innovation lead?
Jarmo Tuisk: Yes, especially in larger organizations. If this topic is simply thrown into the existing IT leader's portfolio, there is a risk the new ball drops on top of the old one. IT leaders already carry many critical systems today. AI adds a transformational change dimension. That is why it makes sense that someone keeps this topic in focus across the organization.
Andres Kostiv: What is the core of that role in your view?
Jarmo Tuisk: First, the organization's capability has to be mapped honestly. What is the current skill level? Where are process bottlenecks? Where could we improve something quickly? At the same time, you cannot stay only in diagnosis mode. Leadership also expects practical examples, pilots, and early results. So the AI lead's job is both strategic and very hands-on.
Andres Kostiv: So on one hand audit and capability assessment, on the other hand harvesting low-hanging fruit.
Jarmo Tuisk: Exactly. You need to handle strategy while showing real things. Teams run proof-of-concepts, pilot assistants, agents, and automated workflows. Not all of these go into production. But at this stage, success rate does not have to be perfect. If some experiments work and one delivers strong impact, that is already very valuable.
Andres Kostiv: Should the AI lead own those projects personally?
Jarmo Tuisk: I would rather not recommend that as the default model. If a sales lead or service lead realizes "this is now the AI lead's project," there is immediate temptation to hand over responsibility. In reality, the business process owner must remain the owner of the change. The AI lead should be more of a curator, support function, guide, and standards keeper, not the person who executes change for everyone else.
Andres Kostiv: So the AI lead should not become the owner of all implementation projects.
Jarmo Tuisk: Exactly. That is one bad pattern I would avoid. If the AI lead automatically becomes the owner of every project, business-side leaders step back. But real change must happen in their teams and in their processes. Accountability has to remain there.
Andres Kostiv: Risks, GDPR, and data protection almost always come up immediately. How big is this topic really?
Jarmo Tuisk: It is a real topic, but it must be handled soberly. People have a completely understandable question: what happens to my data? At the same time, a large share of fears is based on vague assumptions rather than facts. That is why one important part of the AI lead's job is to explain which risks are real and how to manage them. Blindly banning tools does not solve the problem, because usage then moves into hidden channels in people's pockets.
Andres Kostiv: So that is the same shadow AI theme?
Jarmo Tuisk: Exactly. If the organization simply says nothing is allowed, people still use it. They use it on their phone, with their personal account, somewhere else. That does not reduce risk, it just makes risk invisible. Much smarter is to create a framework where people understand what is allowed, what is not, and why.
Andres Kostiv: And at the same time there are fully local or isolated solutions too, right?
Jarmo Tuisk: Yes. If an organization has very sensitive data, local or cloud-isolated solutions are possible today. That obviously requires maturity and understanding of which data flow needs which processing environment. But technologically, this is no longer science fiction.
Andres Kostiv: You also highlighted this contradiction well in the episode: people already save files to the cloud, but at the same time say AI is too dangerous.
Jarmo Tuisk: Yes, that is an interesting point. For example, a company takes Microsoft 365, turns on autosave in Excel, the file goes to the cloud, and that feels normal. But if Copilot is used in the same environment, suddenly someone says no, now the data is in danger. Sometimes the question is not the technology itself, but that people do not fully understand what one or another solution technically means.
Andres Kostiv: The quality control question always comes up too. How much should a human remain in the loop after AI output?
Jarmo Tuisk: As long as AI output is consumed by humans, humans must define quality criteria. From there, some quality control can be automated if we know what kinds of errors the system tends to make. But human judgment does not disappear completely. Especially where customer relationships, legal issues, or critical decisions are involved.
Andres Kostiv: A practical example could be retail. If AI helps forecast how much bread or milk a store should order, there still has to be a gate where a human, or at least very clear control rules, reviews that output.
Jarmo Tuisk: Exactly. And this is where one very important basic thing comes in, even if some people find it boring: people need to understand what type of model they are using. If we talk about a large language model, its strength is not demand forecasting. That requires different model types. If people do not understand that distinction, they try to solve tasks with a language model that it is not fundamentally suited for.
Andres Kostiv: So sometimes two hours of AI fundamentals are more important than a quick pilot?
Jarmo Tuisk: Exactly. Then people develop the ability to distinguish where a technology is sensible and where it is not. Otherwise they start using the same hammer for every problem.
Andres Kostiv: And at the same time, there is one thing AI still does not truly automate.
Jarmo Tuisk: Human relationships. I think the more routine gets automated, the more we will value places where value is created between people. This can involve sales, consulting, service, or leadership. Not everything becomes a conversation between a machine and an employee.
Andres Kostiv: The legacy systems topic also came up in the conversation.
Jarmo Tuisk: That is very important in Estonia. Many systems were built in a time when only human users were assumed. If you now want to integrate AI into workflows for real, old systems cannot be endlessly patched with add-ons. We have to start building systems where an agent is also a user. That means new interfaces, different architecture, and ultimately new investments.
Andres Kostiv: So a system has to communicate not only with a human, but also with an agent.
Jarmo Tuisk: Exactly. If your current information system assumes someone clicks through forms on a screen, that is very inefficient for AI. Yes, you can build emergency solutions where an agent effectively takes over a human's computer and clicks through things, but that is a bit like attaching lanterns to a horse to drive in the dark. It is still a horse, even with lanterns attached. What you really need is a separate interface for AI.
Andres Kostiv: That is a good metaphor. So the problem is not only that we do not have AI, but that the old system was not built for this era at all.
Jarmo Tuisk: Exactly. And that means at some point we need to make painful but necessary decisions toward new systems and new architecture. The good news is software development costs are moving down, and these rebuilds are becoming technically more realistic.
Andres Kostiv: This is also a broader topic, not only about individual tools. If old systems were built ten years ago with a different logic, you cannot assume they can be patched cheaply forever.
Jarmo Tuisk: Exactly. If an organization wants to bring AI into internal processes for real, it is worth rethinking IT investments too. The new baseline in software development is different from even a few years ago. Thanks to AI, we can build, test, and rework much faster. That does not make architecture irrelevant, but it makes the rebuild window much more realistic.
Andres Kostiv: So even painful rebuilds are now possible with less friction.
Jarmo Tuisk: Yes, and because of that it is not always sensible anymore to cling to an old system just because a lot of money was invested in it in the past. If a new way of working requires a new type of system, that decision eventually has to be made.
Andres Kostiv: From there we get to measurement. In your view, what distinguishes organizations that have really progressed with AI?
Jarmo Tuisk: The biggest differentiator is leadership maturity itself. Organizations that move forward do not stop at posters or slogans. Top leaders have personally understood what AI can do, where the limits are, how to use it, and what it means for their business. When top leadership has this experience, the learning speed of the entire organization also changes.
Andres Kostiv: How would you measure whether an organization has truly progressed with AI?
Jarmo Tuisk: Not by number of agents or by number of licensed tools. The real question is whether the business performs better. Is work done faster, smarter, and with lower cost? Can people focus more on value-creating work? And one very important signal is whether leadership genuinely understands the topic, rather than using AI only as a slogan.
In Summary
If I had to summarize the whole conversation into one recommendation for leaders, it would be this: do not start from tools. Start from people, skills, and processes. AI can create very large impact, but only if the organization is ready to rethink its work logic for real.
And the second key point: do not conveniently delegate AI as someone else's problem. Leadership itself must understand it, experiment with it, and build its own vocabulary. If that level exists, the learning speed of the entire organization also changes.