19 May 2025

“AI will be the best teammate you’ve ever had” – building GenAI-driven teams

Eric Ignasik

15 min read

How can organisations unlock GenAI’s power to drive productivity and innovation, without falling foul of compliance? 

Michael Anyfantakis, founder of Vigilant AI² and veteran of banking and fintech innovation, believes the answer lies in moving fast and staying safe.

In this no-nonsense playbook, he shares how to structure lean, AI-native teams, avoid shadow AI risks, and turn GenAI into a trusted teammate, not a liability.

The CTO vs Status Quo series studies how CTOs challenge the current state of affairs at their company to push it to new heights … or to save it from doom.

“With a GenAI teammate, you have somebody that is multi-skilled and capable in nearly every delivery discipline.”



Highly regulated industries are always the last to adopt big tech innovations. When managers can go to jail over compliance failures, it’s understandable that they want to avoid risks, including those that come with implementing AI, at all costs.

For Michael Anyfantakis, this risk-averse approach misses the opportunity. He believes that instead of avoiding AI, tech leaders should face it head-on, because the benefits it brings simply can’t be ignored.

In this interview, you’ll see how to introduce GenAI into your organization in a safe and sustainable way. You’ll also find out:

how GenAI can reduce agile teams from 9 to 3 people and delivery times from weeks to days,

what the right mindset for introducing GenAI in your company and team is,

why the problem of shadow AI should and can be avoided,

how to minimize the risk of GenAI causing regulatory compliance issues.

But first, find out more about Michael.


About Michael

Seasoned product and technology leader with 25+ years of experience driving innovation and digital transformation across financial institutions, FinTechs, and consulting firms. As co-founder and CPO of Vigilant AI², he's developing an AI compliance platform for responsible GenAI adoption. His expertise spans cloud, SaaS, agile delivery, and AI-powered solutions, with a track record of building high-performing teams and delivering impactful products.

Meet Michael and Vigilant AI²



Eric Ignasik: Hello, Michael. Recently, you’ve had a chance to talk about GenAI quite a lot. This is also a key piece of expertise at Vigilant AI² – a company you co-founded. What are you up to these days?

Michael Anyfantakis: Hello Eric, thank you for having me here. 

I've worked with AI in different modes for the last 10 years, but I think GenAI has brought it to its inflection point. Because it's a general-purpose technology and can be a companion for every employee, it will have a transformational impact on all industries, even bigger than the whole digital and internet wave.

That's why I decided it was a great opportunity to address a big pain point – how to ensure that companies and their employees can use AI in a way that doesn't get anyone in trouble. 

I co-founded Vigilant AI² to find a solution to this problem and put GenAI into the hands of every employee, boosting productivity without breaking anything.

The foundation for GenAI-driven teams



I want to talk about GenAI in the context of building efficient teams. Let's start with the foundation. GenAI seems like an inherently innovative area that requires a lot of trial and error. How can C-level executives foster a culture of experimentation in such teams without compromising short-term delivery goals?

People need to think of generative AI as an always-on teammate for everybody to help them do their job faster, better, and cheaper. But a lot of executives are very conscious of the risk that this brings and so they tell people, "You can't use this until we get everything right." In my view, this risk-averse approach misses the opportunity.

If you think of GenAI as a general-purpose tool like Excel, or even a PC given to every employee, you can always think of new ways of solving problems with it. But this approach requires you to make sure that you work with AI in a way that’s governed, compliant, secure, and responsible, because a lot of people could do the wrong things.

The more forward-thinking CEOs and leaders are telling their employees to use these tools responsibly. They aren’t hiring new people, but maximizing the use of these tools first.

Sounds like you’re talking about the leaked Shopify memo, which required employees to prove that something can’t be done with AI before a new hire is approved.

Yeah. Although it was initially leaked, the CEO later confirmed it publicly, and it was only the first formal statement in a wave – a few more CEOs have done the same since then.

The opposite is also happening – I’ve seen a lot of organizations ban or significantly restrict the use of these tools. In those cases, a lot of the smarter employees who understand the power of these tools end up using them on their own devices.

We call it Shadow AI, or "bring your own AI," where people use tools like Cursor on their own laptops to accelerate development. But because the companies don’t know about it, the productivity gains are all for the individuals and not for the whole organization.

Michael Anyfantakis

founder of Vigilant AI²

This creates a paradox where proactive employees are using productivity-enhancing tools but hiding them from their managers, filling the time they save with personal tasks! Making the use of GenAI a formal declaration and an expectation of increased productivity, as Shopify has done, is a better way forward.

As you said, there are many ways to use GenAI to benefit the business. How can organizations strike a balance between encouraging creative autonomy and maintaining alignment with business objectives in GenAI initiatives?

GenAI can make everybody more productive, but it also manages complex information better than humans can. It can significantly help drive both team autonomy and alignment at the same time – a tension we have always had in agile teams.

The autonomy increases because you get a powerful teammate who can help the team solve different problems. But as long as you record, capture, and monitor everything that’s happening, you can ensure that you get significant oversight across what multiple teams are doing and bring alignment. 

What’s more, GenAI itself can process the information that you’ve captured and recorded faster than any human possibly can.

Building agile AI-driven teams



Let's talk about the daily management of teams that use AI solutions. What are the best practices for structuring an agile team for generative AI projects?

The big issue is: How do you structure agile teams in a world where GenAI is there to support you in everything you need to deliver?

In agile teams, the main challenge has always been to keep them small but also get all of the skills that are required from product to design, engineering, data, and testing in order to be able to deliver an end-to-end product and lifecycle. With a GenAI teammate, you have somebody who is multi-skilled because they can do nearly anything on the team. As it improves, AI will become better than most of us in every single delivery discipline. 

The other point is the boost in efficiency and productivity. Until now, agile teams had to do a two-week sprint to deliver a feature. With GenAI as a teammate that can rapidly design, code, and test, that timescale is much shorter.

So you can create smaller teams, augment their capacity and skills using GenAI, and have them go much faster. From scrum teams of seven to nine people delivering a feature every two weeks, you can get to a team of two to three people delivering features daily!

Michael Anyfantakis

founder of Vigilant AI²

We already see a lot of that within Vigilant AI². We have a very small team coming up with new features every couple of days, and the feedback loop is more rapid than ever before.

So we'll be cranking things out faster and the teams will be smaller because of the huge efficiency boost.

Yes. An agile product manager's job today is to bring together all of the elements of the product specification, the design, the development, the QA, and the final acceptance. They do that by utilizing resources within the team that handle each individual element of the lifecycle.

I believe that in one or two years, product managers will increasingly use GenAI to handle most aspects of product development. But you will need more experienced people who bring things together. Engineers will do the real engineering – actually solving the more complex problem of making sure that the end-to-end solution works.

How can you embed this cross-functional collaboration into the typical day-to-day workflow of GenAI-enabled teams?

You need to think about GenAI as a teammate. You can have a designer agent for wireframes, a junior product manager agent to write your user stories, or a QA agent for automated testing. Based on the humans that you have within your team, you need to define what skills those humans bring, and what skills GenAI agents would need to bring, in order to cover all aspects of product design and delivery.
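To make this concrete, here is a minimal sketch of how a team might audit which delivery disciplines are covered by humans versus agents. Everything in it – the names, roles, and skill list – is illustrative, not drawn from any particular product:

```python
from dataclasses import dataclass

# Illustrative sketch only: map delivery skills to humans and GenAI
# agents, then check that the team covers every discipline.
# All names, roles, and skills here are hypothetical.

@dataclass
class Teammate:
    name: str
    kind: str            # "human" or "agent"
    skills: set[str]

REQUIRED_SKILLS = {"product", "design", "engineering", "qa", "data"}

team = [
    Teammate("Alice", "human", {"product", "engineering"}),
    Teammate("design-agent", "agent", {"design"}),
    Teammate("qa-agent", "agent", {"qa"}),
]

# Union of everyone's skills, human and agent alike.
covered = set().union(*(t.skills for t in team))
missing = REQUIRED_SKILLS - covered
if missing:
    print(f"Disciplines to cover with a human or an agent: {missing}")
else:
    print("All delivery disciplines covered.")
```

Running this flags the `data` gap, which, following Michael’s framing, you would fill with either a human hire or another agent.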

How does agile need to evolve to meet the challenges of such AI development? I’m talking about things such as dependencies, model drift, or unpredictability in output.

The emphasis needs to be much more on the continuous evaluation of those agents and models, and less on the elements of coding and fixing.

In the past, a lot of the effort in building a new solution went into designing and developing the code, and then testing deterministically to find and fix bugs in it. This part of the process is going to be much smaller now because a lot of the coding itself can be done by GenAI, so you get automatically tested, working code faster.

But what you need to evaluate is the less deterministic and more probabilistic output of those agents or models. This is more akin to coaching than debugging – more like evaluating and training humans than identifying errors and fixing code. That’s what a lot of the AI companies are doing. You hear about evals, which are automated tests of human-like output and behaviour, like the exams you take at school or university. There’s also RLHF, reinforcement learning from human feedback: a set of assessments where humans review the outputs of GenAI and feed back their opinions and preferences so that the next model is improved.

So it’s more of a learning and feedback loop, rather than a testing and fixing loop.
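As a rough illustration of what an eval harness looks like in spirit, here is a toy sketch: graded checks over model output rather than deterministic assertions. The `ask_model` function is a stand-in for any real LLM call, stubbed out so the example runs on its own:

```python
# Minimal eval sketch: grade model answers against rubric-style
# checks instead of exact-match assertions. `ask_model` is a
# placeholder for a real LLM call (e.g. an API client).

def ask_model(prompt: str) -> str:
    return "Paris is the capital of France."   # stub response

EVAL_CASES = [
    # (prompt, predicate the answer must satisfy)
    ("What is the capital of France?", lambda a: "paris" in a.lower()),
    ("Reply politely to this complaint.", lambda a: "idiot" not in a.lower()),
]

def run_evals() -> float:
    passed = sum(check(ask_model(prompt)) for prompt, check in EVAL_CASES)
    score = passed / len(EVAL_CASES)
    print(f"Eval pass rate: {score:.0%}")
    return score

run_evals()
```

The point of the pattern is the one Michael makes: you track a pass rate over probabilistic behaviour and feed it back into the model or prompts, rather than fixing a single deterministic bug.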

Let’s talk about those highly skilled AI-driven humans. Have you heard of any strategies to attract and retain the top AI talent you need to build small teams with 10x output? This is a very competitive space, isn’t it? We can see a surge in AI job descriptions popping up everywhere. What are your thoughts on the job market?

There are a few different types of “AI talent”: 

One is the more obvious – people who understand how to create new GenAI solutions, such as data scientists and model developers. Those skills are extremely valuable for companies like Google or OpenAI that actually build the core LLMs (Large Language Models), or large banks that have a lot of proprietary data and can build their own models. But there are very few jobs like that compared to all of the work that can be done by using LLMs to build solutions. 

It’s a second type of talent that we now see emerging, and it’s much wider in its adoption. For example, how do you take LLMs and do fine-tuning on top of them? How do you then use tools like LangChain or knowledge graphs to build good RAG (Retrieval-Augmented Generation) systems? How do you create and build agents on top? These are more data engineering and product-type skills, together with all the evals that we talked about. This is where the majority of the skills in the market will likely start to evolve. (A minimal sketch of the RAG pattern appears at the end of this answer.)

Then there’s a third type, which is not about developing any GenAI solutions but using them instead. Tools like Cursor can accelerate coding to solve a problem outside of the AI domain. To me, this is even more interesting because it will create a gap between old-school engineers taking a long time to do everything manually versus GenAI-powered engineers doing the same thing 10 times faster.

When engineers start to work in this way, they realize how much more productive they can be. Their skills become extremely valuable because they can do more things. But at the same time, they will be looking for an environment where they're allowed to do those things.

So if you hire an engineer that uses Cursor and forbid them from using it, they’ll look at you the same way that cloud-native engineers have been looking at companies forcing them to work on-premise in the last decade. They might not join at all, or if they do they will quickly become very frustrated and move on.

The way to attract talent is to make sure that you allow the safe use of the latest tools within your organization. Otherwise, the best talent will go elsewhere.

Cloud is a good analogy because there are few engineers who actually build cloud environments and platforms like AWS or GCP. The bulk of the market is all the engineers who use those platforms to develop other solutions, and then those who use cloud-powered and SaaS tools in their work to improve their productivity. The same three tiers are going to emerge in AI. They’re all going to be in high demand, but the pools are very different in size – you’re talking about the 1%, 10%, and 90% of engineers.
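To ground the RAG idea from the second talent tier, here is a library-agnostic toy sketch: retrieve the most relevant snippets, then prepend them to the prompt. A real system would use embeddings and a vector store (for example via LangChain) rather than this word-overlap score, which is only here to keep the example self-contained:

```python
# Toy RAG sketch: rank documents by relevance to the query, then
# build a prompt that grounds the model in the retrieved context.
# Real systems replace `score` with embedding similarity.

DOCS = [
    "Refunds are processed within 14 days of a return request.",
    "Premium accounts include priority support and API access.",
    "Data exports are available in CSV and JSON formats.",
]

def score(query: str, doc: str) -> int:
    # Crude relevance proxy: count shared words.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str, k: int = 2) -> str:
    top = sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]
    context = "\n".join(f"- {d}" for d in top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do refunds take?"))
```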

Compliance in agile AI teams



Let's now talk about compliance, which is crucial for companies that have little experience with AI and wonder how to best introduce it into their development. 

First, in highly regulated industries like fintech or medtech, how can companies innovate quickly while remaining compliant with evolving regulations?

When an employee starts to use GenAI tools, they might inadvertently leak some data that they’re not supposed to. They might unknowingly breach data privacy rules because they’re sharing PII with tools outside of the organization that aren’t controlled. They might share IP that can then be used to further train the models or otherwise harm the company.

These are the obvious challenges that companies have, but then there are many other types of challenges that come along. You might not leak any sensitive information, but you could ask GenAI tools to do things that are not within your job description, that expose the company to other types of risks, or that are biased and unethical.

For example, the EU AI Act says that you shouldn't use AI to evaluate humans. An employee could ask GenAI to help them write somebody's appraisal, which is fine, but then they might ask it to rate that person and their whole team based on such appraisals. If they use that output to evaluate their team, they could breach the EU AI Act regulation.

It's a gray boundary of what you should and shouldn't be doing with these tools. What we're saying is that you need to use the tools themselves to provide the necessary level of oversight, monitoring and compliance to make sure that employees know the boundaries.

You can train employees to use GenAI in a safe way, but if you also monitor what they do, you can always nudge or intervene when necessary. Some companies have started with the basics: protecting PII by blocking or masking sensitive data.

We started to look at how people use these tools, what kind of help they ask for, and how it aligns with the compliance obligations of a company. Our platform continuously monitors and reminds people what to avoid. It’s much more efficient than a training session they’ll forget in a week.
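As a toy illustration of those basics, here is what masking can look like in miniature. A real platform would rely on far more robust detection (NER models, policy engines) than two regular expressions; this is only a sketch of the shape of the idea:

```python
import re

# Toy PII-masking sketch: replace obvious identifiers before a
# prompt leaves the organization. Real detection is far broader
# than these two patterns.

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
}

def mask_pii(prompt: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"<{label}>", prompt)
    return prompt

print(mask_pii("Email john.doe@example.com or call +44 20 7946 0958."))
# -> "Email <EMAIL> or call <PHONE>."
```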

Do you think that highly regulated areas like fintech, banking, or government are always going to be less likely to rely on GenAI tools heavily, or is this going to level out as the knowledge about how to use these tools expands?

Financial services are a very good example. In the UK, the senior management – the people who are accountable and could even go to jail if their company’s employees breach compliance – will be much more conservative and worried about those individual use cases than in other industries. At the same time, the productivity and efficiency you can get out of GenAI is going to be so immense that I don’t think any company will be able to ignore the benefits.

You’ll see new companies and people in fintech who are more open-minded about how they use these tools, entering the market with efficiency that will be orders of magnitude greater than what traditional players are capable of. This will force legacy players to adopt these tools very quickly, but they will need to do it in a way that they can demonstrate to the regulators that they have the necessary guardrails.

This is where we think we have an interesting opportunity, which is providing the ability to use GenAI as the enabler of compliance enforcement. We’re creating GenAI compliance agents that can help the compliance department multiply its reach and its ability to monitor and govern.

How can such intelligent compliance agents and sandbox environments be used to enable safe experimentation in GenAI projects, especially if somebody wants to play around with AI before committing to it in production and going all the way?

You need to start in an isolated environment. That’s what we do – we build our compliance agents in a sandbox that can host an open-source LLM.

Very quickly, you can get a local model that is completely isolated, and you can use it to monitor and assess everything you’re doing before you start to go outside of your sandbox. We’re training our compliance agents in this isolated environment and then having them monitor, amend, or desensitize and approve what users ask or share before it can go outside of the boundaries of the organization.
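In miniature, that gateway pattern might look like the sketch below. The local-model check is stubbed out; in practice it would be a locally hosted open-source LLM classifying each request against company policy. All function names here are hypothetical, not taken from Vigilant AI²’s platform:

```python
# Hypothetical prompt-gateway sketch: every outbound prompt is first
# assessed inside the sandbox before reaching an external model.

def local_model_verdict(prompt: str) -> str:
    # Stand-in for a locally hosted LLM that classifies the prompt
    # against company policy; this stub just flags one phrase.
    return "block" if "salary data" in prompt.lower() else "allow"

def send_to_external_llm(prompt: str) -> str:
    return f"[external model response to: {prompt!r}]"   # stub

def gateway(prompt: str) -> str:
    if local_model_verdict(prompt) != "allow":
        return "Request blocked by compliance gateway."
    return send_to_external_llm(prompt)

print(gateway("Summarize this meeting transcript."))
print(gateway("Compare everyone's salary data in this sheet."))
```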

These types of patterns can be used by teams experimenting with how they build agents for other purposes within an organization. We believe that the other emerging element is the concept of agentic AI. You develop agents that are autonomous – they’re there to do a task. You can build a customer service agent or an HR agent that’s always there to help.

But if you want that HR agent to be compliant the same way an HR person would be, you need to have that HR agent check with a compliance agent that what they do is within company policy. You’ll start to see more of these multi-agent systems and workflows where individual agents play the role of an individual employee, and each agent takes on a different role.
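A minimal sketch of that agent-checks-agent idea follows, with both agents stubbed out; in a real system each would wrap its own LLM with its own instructions, and the policy rule here is just the EU AI Act example from earlier:

```python
# Toy multi-agent sketch: a task agent consults a compliance agent
# before acting. Both are stubs standing in for LLM-backed agents.

def compliance_agent(action: str) -> bool:
    # Stand-in for a policy-aware LLM; one hard-coded rule inspired
    # by the appraisal example above.
    return "rate employees" not in action.lower()

def hr_agent(request: str) -> str:
    if not compliance_agent(request):
        return "I can't do that: it conflicts with company policy."
    return f"[HR agent handles: {request}]"

print(hr_agent("Draft an appraisal template."))
print(hr_agent("Rate employees based on this appraisal data."))
```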

General advice



Let’s sum things up. Given all that we've discussed, what advice would you give to tech leaders and executives who want to introduce AI safely into their workflows, but at the same time don't want to miss out on any benefits that AI provides?

Leaders should personally explore GenAI tools in their daily work – starting with tasks they currently perform, then with tasks they would normally ask someone on their team to perform for them, and eventually with things they never even imagined they could do because they would have taken so much time in the past. What I find is that people who have used GenAI tools enough reach some ‘WOW’ moments that completely change their perception. Leaders also need to reach these moments, and once they do, they can truly understand the benefits of using AI and get their employees to understand them as well.

Michael Anyfantakis

founder of Vigilant AI²

Resources



Can you also recommend some learning resources for those who want to explore the topic further?

As for books, I can definitely recommend The Coming Wave by Mustafa Suleyman and Co-Intelligence by Ethan Mollick. I also listen to podcasts such as The Artificial Intelligence Show and the Lex Fridman Podcast. For more quality AI-related content, check out the YouTube channels of Wes Roth and Y Combinator.

What’s next? Four strategies to build GenAI-powered agile teams in a safe and sustainable way



To unlock GenAI’s productivity gains without compromising compliance, Michael recommends four practical moves:

Treat GenAI as a teammate – not a tool. Use it to augment human capability, compress team sizes, and dramatically accelerate delivery.

Create a safe environment with guardrails in place, tailored to your company’s risks, roles, and regulatory obligations.

Use GenAI to automate compliance – deploy GenAI-powered compliance agents to monitor and govern other agents, ensuring responsible AI behaviour by design.

Lead by doing and experiment personally – the fastest path to organisational adoption is through first-hand discovery of what GenAI can (and can’t) do.

The organisations already embracing GenAI aren’t just gaining efficiency – they’re outpacing their peers by orders of magnitude. Those who hesitate risk irrelevance as faster, leaner competitors reimagine what’s possible.

Authors

  • Eric Ignasik

Experienced software delivery consultant working in the IT industry. Believes that sales are a by-product of a healthy business relationship and added value – not a goal in itself. Born in New York; based in Krakow. Specialties include consulting, B2B sales, SaaS, IoT, and software development services.

