Ten Reasons Your Technical Team Will Keep Growing – Despite AI

Written by Arin Sime | Mar 5, 2025

Are you waiting to hire technical talent because you think the next LLM release will make your engineering team obsolete? That’s understandable: AI undeniably helps software developers with parts of their job. But these impressive gains in task efficiency do not make the overall work of your software team obsolete, and they shouldn’t stop you from hiring the right talent and growing your team in ways that help you adopt AI across your technical and business practices.

There’s no doubt that AI is transforming software development—and that’s a good thing. Like past technological revolutions, this one is more about reinvention than replacement. As industry luminary Tim O’Reilly recently wrote:

“[AI] is not the end of programming. It is the end of programming as we know it today … It is the beginning of its latest reinvention.”  Tim O’Reilly, Feb 4th 2025

As the founder of O’Reilly Media, Tim has had a first-person view of every major technology change, since his company contracts with top talent to write books about technology. Tim’s article describes how past revolutions also reinvented the software engineering role (think of the web, mobile development, no-code/low-code frameworks, etc.). As someone who started my career in assembly programming, jumped into the web with early internet e-commerce startups, and has seen every reinvention since, I completely agree.

Your technical team will not only survive the AI revolution but will become more critical than ever. The key is understanding that AI is a powerful collaborative tool that augments human expertise, but does not replace it.

With that in mind, here are ten reasons why your technical team will still need to grow in 2025 and beyond, regardless of the impact of AI on software development. I’ll go into each one in more detail below, and then at the end, I’ll share the only reason why you should decrease your team size.

  1. Chat-Oriented Programming Still Requires Technical Expertise
  2. Large Language Models Demand New Technical Skillsets
  3. Agentic AI Systems Need Complex Human Integration
  4. LLM Integrations Require Comprehensive Testing
  5. AI Technologies Introduce Sophisticated Deployment and Scaling Challenges
  6. Production Environments Demand Nuanced Triage
  7. Legacy Systems Need Maintenance and Migration
  8. AI Systems Require Ongoing Bias Mitigation
  9. Smaller Teams Increase Human Dependency
  10. Emerging Technologies Will Create Novel Technical Roles and Specializations

 

Let’s break them down.

#1 – Chat-Oriented Programming Still Requires Technical Expertise

Steve Yegge recently wrote about “the Death of the Stubborn Developer”. He talks about the challenges facing junior software developers in the industry today, but perhaps even more so, how “stubborn” developers are going to be left behind in this industrial revolution.

Steve’s call to action for both junior developers and more senior stubborn developers is that they need to learn “Chat-Oriented Programming”, or CHOP as he abbreviates the term. This is the way that many developers are already working, including our teams at AgilityFeat and WebRTC.ventures.

In Chat-Oriented Programming, the developer codes side by side with an LLM, regularly asking it for help generating executable code as well as test cases for that code. The LLM also helps with other important tasks like code refactoring, enforcing coding standards, and brainstorming alternative paths or algorithms.

This can happen in the developer’s IDE, via a code-completion model that suggests code inline as you write it, or in a side window open to a publicly available LLM like ChatGPT.

The quality of the generated code depends on the quality of the developer’s prompts, and on how well they stitch answers together into a coherent system. The CHOP developer also needs to be on the lookout for hallucinations. Even though these are less frequent when generating something discrete like code, an LLM can still supply incorrect, incomplete, or misleading information in its quest to please you as the user.

Working with an LLM represents a higher level of coding abstraction, but it is emphatically not a replacement for technical expertise. Instead, it demands developers who are technically proficient, adaptable, and genuinely excited about learning innovative working methods.
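To make that concrete, here is a toy illustration (entirely hypothetical code, not from any real LLM session) of why the human stays in the loop: an LLM-suggested helper that looks plausible but misses an edge case that the developer’s own test catches.

```python
# Hypothetical LLM-suggested helper for averaging review scores.
# It looks plausible, but it crashes on an empty list: exactly the
# kind of edge case a CHOP developer still has to catch themselves.
def suggested_average(scores):
    return sum(scores) / len(scores)  # ZeroDivisionError when scores == []

# Human-reviewed replacement: the empty-list case is defined explicitly.
def safe_average(scores):
    return sum(scores) / len(scores) if scores else 0.0

assert safe_average([4, 5, 3]) == 4.0
assert safe_average([]) == 0.0
```

The fix is trivial, but spotting that it was needed is the technical expertise part.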

#2 – Large Language Models Demand New Technical Skillsets

In addition to using AI to help us write applications, modern software teams also need to become experts at integrating LLMs and AI into their existing applications, in order to offer new functionality to their users.

This is broadly called “Prompt Engineering,” and it’s much more technical than interacting with a consumer-level LLM like ChatGPT. Engineers need to build hooks into their system for communicating with LLMs programmatically via APIs and SDKs. Then they need to programmatically create “prompts” to send to the LLM. Finally, they must process the responses correctly to deliver the appropriate results to their users.

This is still very technical work that requires a combination of software development and DevOps skills to integrate the LLMs with custom data stores, custom UIs, and legacy applications, and to deploy it in scalable cloud infrastructure.
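As a rough sketch of that pipeline (the model call is stubbed out, and the function names and JSON shape are illustrative assumptions, not any particular vendor’s API), the three steps look something like this:

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for an SDK or HTTP call to a hosted model.
    # A real implementation would POST `prompt` to a provider's API;
    # here we return a canned reply so the pipeline runs end to end.
    return json.dumps({"summary": "Customer reports a missing delivery.",
                       "sentiment": "negative"})

def build_prompt(ticket_text: str) -> str:
    # Step 1: programmatically assemble the prompt from application data.
    return ("Summarize the support ticket below and classify its sentiment.\n"
            "Respond as JSON with keys 'summary' and 'sentiment'.\n\n"
            f"Ticket: {ticket_text}")

def summarize_ticket(ticket_text: str) -> dict:
    # Step 2: send the prompt to the model.
    raw = call_llm(build_prompt(ticket_text))
    # Step 3: validate and parse the response before the rest of the
    # application (UI, data store) ever sees it.
    data = json.loads(raw)
    if not {"summary", "sentiment"} <= data.keys():
        raise ValueError("Model reply missing required fields")
    return data

print(summarize_ticket("My order says delivered, but nothing arrived."))
```

The validation step is where much of the real engineering lives: a production version also has to handle malformed JSON, refusals, and timeouts.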

I spoke with John Berryman in a recent episode of the Scaling Tech Podcast about the topic of Prompt Engineering, if you want to learn more about this incredibly interesting area of software engineering. John is author of “Prompt Engineering for LLMs: The Art and Science of Building Large Language Model–Based Applications”. We talked about all of the new skillsets that software engineers need to learn around selecting a model, prompting that model, integrating it into their applications via prompts, and much more.

#3 – Agentic AI Systems Need Complex Human Integration

The most visionary prediction you may be hearing right now is the concept of Agentic AI. This term is not always used consistently, but it generally means having an LLM that can actually go out and do something in the real world. For example, ChatGPT is an LLM that takes your text-based questions and provides text-based responses, which you must read or process programmatically. An Agentic AI, on the other hand, is an LLM that can interact with other systems and perform tasks on your behalf.

From a consumer perspective, this might mean asking an LLM to order takeout for you, letting it use your credit card and other personal information it knows about you to have your favorite pizza delivered to your home.

When applied to software engineering, this concept might mean that someone on the business side could write a prompt like: ‘Find all lost customer leads who meet certain criteria, craft a marketing campaign for them, and generate a discount code in our purchasing system for re-engagement.’

If a business user could write a prompt like this and trust that it would work correctly—without hallucinations or frustrating customers with bad messaging—then this prompt could replace the need for a software engineer to write an entire piece of code. Agentic AI for managing software applications and IT systems is not yet close to that vision, and even if/when it reaches that point in the future, there will still be a lot of software engineering work required to integrate your business systems with the Agentic AI tools.
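To show the kind of orchestration work that still falls to engineers, here is a minimal sketch of that re-engagement flow with the “tools” an agent would invoke stubbed out in plain Python. The lead data, function names, and discount format are all invented for illustration; a real agent would let an LLM decide which tool to call next, but the integrations themselves still have to be built.

```python
# Hypothetical business "tools" an agent could be given access to.
def find_lost_leads(min_days_inactive: int) -> list[str]:
    # Stand-in for a CRM query; in reality this would hit a real API.
    days_inactive = {"ana": 120, "ben": 30, "cho": 200}
    return [name for name, days in days_inactive.items()
            if days >= min_days_inactive]

def generate_discount_code(lead: str) -> str:
    # Stand-in for a call into the purchasing system.
    return f"WELCOME-BACK-{lead.upper()}"

def run_reengagement_flow(min_days_inactive: int = 90) -> dict[str, str]:
    # A fixed loop standing in for the agent's plan: find the lost
    # leads, then create one discount code per lead for the campaign.
    return {lead: generate_discount_code(lead)
            for lead in find_lost_leads(min_days_inactive)}

print(run_reengagement_flow())
```

Every stubbed function here is an integration point that engineers must build, secure, and test before any agent can be trusted to drive it.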

Early in my career, I worked on several internally developed CRMs and ERPs to support the business side of the companies I worked for. Tools like Salesforce made that custom development work obsolete. However, many technical experts today build entire careers around creating integrations for Salesforce. The same will be true for Agentic AI!

Speaking of Salesforce, this quote from the former co-CEO of Salesforce, Bret Taylor, is interesting:

“That last mile of taking a cool platform and a bunch of your business processes and manifesting an agent is actually pretty hard to do,” Bret explained. “There’s a new role emerging now that we call an agent engineer, a software developer who looks a little bit like a frontend web developer. That’s an archetype that’s the most common in software. If you’re a React developer, you can learn to make AI agents. What a wonderful way to reskill and make your skills relevant.” – Bret Taylor, as quoted in Tim O’Reilly’s article

#4 – LLM Integrations Require Comprehensive Testing

When you integrate an LLM into your application … let’s say to help your customers plan a vacation, seek medical advice (risky!), choose the best loan (also risky!), or just order take out food … there is still an underlying challenge:

How will you test it?

By their very nature, LLMs provide answers in readable formats, like a normal human conversation. As such, they are not repeatable in most cases: each time you test one, you will get a different answer. Each answer is hopefully still useful and truthful, but that is something you need to test both before and after deploying your LLM application into the wild. This is a whole new skillset for both developers and testers to acquire, and the best practices are changing rapidly.

LLM testing also won’t stop when you make your first deployment to production. Models will continue to be updated, and new models will enter the marketplace. This form of testing will need a creative combination of automated testing and judgment from real humans.

Testing LLM integrations will certainly require reskilling your existing team, as well as bringing in new team members with the right experience to do this kind of work.

#5 – AI Technologies Introduce Sophisticated Deployment and Scaling Challenges

Every type of technology requires unique expertise to scale it. To scale a traditional website, you need DevOps experts who know how to use tooling like Kubernetes on major cloud platforms like AWS or Azure. If you’re using a unique type of technology, like our team at WebRTC.ventures does when building real-time video and audio communication apps, the scaling needs are different. You need more highly skilled DevOps experts who understand the unique needs and scaling constraints around media servers and how to configure load balancing around those servers.

A similar thing is happening with LLMs. When you build an LLM into your application, you integrate large datasets with third-party models, probably hosted elsewhere in a SaaS fashion. You need to communicate with very low latency, because your customers are waiting for the results! This is a complex problem that I will be discussing in Episode 100 of WebRTC Live on March 19 with one of our WebRTC.ventures clients, AVA Intellect. Building a Voice LLM application that runs inside a standard meeting tool and has to integrate with third-party models using both speech and text at very low latency is not easy to do. It requires the kind of special expertise our team at WebRTC.ventures has developed, and which we can help you hire for via AgilityFeat.

As your company starts building LLM-based applications, you’ll discover that one of your biggest expenses is the cost of compute for that LLM. The more contextual data you need to give the LLM in order to get useful answers, the more dramatically your costs will rise. Human talent with DevOps expertise in scaling LLMs, and development teams who understand how to make the most efficient use of prompts and the tokens exchanged, will be very valuable to you. The cost of that human talent will likely be much lower than the cost of your LLM running at scale, so this is an important area in which to increase your investment in human talent.
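A back-of-the-envelope model shows why prompt efficiency matters. The per-1K-token prices below are placeholders, not any provider’s real rates; the point is the shape of the math, not the numbers.

```python
def estimate_request_cost(prompt_tokens: int, completion_tokens: int,
                          price_in_per_1k: float,
                          price_out_per_1k: float) -> float:
    # Typical pricing model: separate per-1K-token rates for input
    # (prompt plus context) and output (the model's completion).
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k

# Hypothetical rates: $0.01 per 1K input tokens, $0.03 per 1K output.
lean  = estimate_request_cost(500,  300, 0.01, 0.03)   # small prompt
heavy = estimate_request_cost(8500, 300, 0.01, 0.03)   # +8K tokens of context

print(f"lean=${lean:.4f}  heavy=${heavy:.4f} per request")
```

Multiply that per-request gap by millions of requests per month, and a team that trims unneeded context out of every prompt can pay for itself quickly.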

#6 – Production Environments Demand Nuanced Triage

Eventually, Agentic AIs may be able to identify issues in production and even suggest code changes to fix bugs—perhaps even before your users discover the issue. When/if the technology reaches that point, Agentic AIs will become valuable partners for technical teams. 

Still, we will always want humans in the loop. Technology already exists that allows companies to deploy code directly from a GitHub commit by a junior developer, automating testing and production deployment without human intervention. However, most companies still introduce a delay by having engineers and testers review PRs and run manual tests before the change reaches production. Why don’t companies fully utilize this level of automation, which doesn’t even involve AI? It’s simple: because humans are valuable in the loop, improving overall system quality. For most companies, that’s worth the short delay.

LLMs are fantastic tools and they will only get better. But humans are still incredibly effective problem solvers, and that’s ultimately what you want when diagnosing issues in production.

As machine learning researcher and computer scientist Chip Huyen said recently on The Pragmatic Engineer podcast:

“We tend to confuse the most salient part of an activity with the job itself … Software engineering is about solving problems” – Chip Huyen, author of AI Engineering

#7 – Legacy Systems Need Maintenance and Migration

If I go through my connections on LinkedIn, I guarantee I can still find people who are actively writing code in COBOL. That’s a programming language that was already considered obsolete by most people when I graduated engineering school in 1997. Yet COBOL still serves a useful purpose in many companies, especially financial institutions. Legacy systems have a way of living on forever. Managing Technical Debt is a constant challenge for software engineers and a topic that I discussed with Lou Franco on episode 52 of the Scaling Tech Podcast.

Even if you could throw away legacy systems and start fresh with AI-driven applications, you would still need talent with the latest skills to build them. You also need developers to maintain the old applications in parallel with building the new ones, and then to handle the migration.

Software architecture is evolving, too, and the architectural skills that designed your legacy system are not the same ones needed to build its replacement. I spoke with Jean-Louis Quéguiner of Gladia.io on the Scaling Tech Podcast about “Build vs Buy with AI”. We talked about deciding whether to use off the shelf LLMs or to fine tune your own model. His advice is equally true whether you are building a new system from scratch or migrating a legacy system to incorporate the latest technologies.

#8 – AI Systems Require Ongoing Bias Mitigation

One of the most over-the-top claims is the idea that the org chart of the future will show companies with only a few employees and a single engineer. The assumption is that AI tooling will make coding so easy that future companies will need only one software developer, if they need software developers at all.

The thing about LLMs is … they basically do what you tell them to do. They are designed to predict the next most likely word or code that will make you happy, based on the prompt (i.e., the “input”) that you give them. They are an incredibly sophisticated and powerful tool, but the classic data science joke still applies:

Garbage In = Garbage Out

Yes, an LLM is only as good as the data it is trained on, but that’s not what I’m referring to here. I’m referring to the false notion that a single developer can provide a simple prompt to an LLM that adequately covers the full scope of the application they need to build, and the equally simplistic notion that a single engineer can accurately and completely give an Agentic AI all the direction it needs to self-manage a task without further human input.

If there is just one engineer writing a prompt or directing every Agentic AI process in the company of the future, that entire system will work according to the decision biases of that single engineer. The best way to prevent decision biases is to have a team of people working in a sustainable fashion, so that a variety of viewpoints are represented in the system.

The internet startup bubble created an era of Cowboy Coders and Rogue Developers, and our team at AgilityFeat was hired more than once in the early days to clean up their messes. A single rogue developer with AI tooling will still produce subpar work, just more efficiently.

#9 – Smaller Teams Increase Human Dependency 

This one is easy to explain because I’ve yet to meet a person who enjoys being interrupted when they are on vacation. If you scale back your tech team believing coding assistants make them so much more efficient, or you only hire a single developer because “AI will handle the rest”, you are setting yourself up for failure.

Just as a single server is a point of failure for your application, and no company would build a good business around just one server, the same is true for a single engineer—no matter how much AI tooling they have at their disposal.

Small teams are great up to a point, but you still need to have redundancy in every role. Your team members will want to go on vacation. And sometimes, no matter how cool you are, they will also want to take another job. The classic question of “what happens if this person gets hit by a bus?” cannot be adequately answered by “AI tooling”.

For the sake of team morale as well as redundancy planning, don’t buy into the hype that the org chart of the future has only a single engineer on it.

#10 – Emerging Technologies Will Create Novel Technical Roles and Specializations

I’m ultimately a techno-optimist, even if I think some of the hype around AI is overblown. There are so many positive changes that AI has and will bring to software engineering teams (more on that in my post, The Impact of Generative AI on Software Development Teams ) and to building radically new applications with functionality we could not even imagine a few years ago.

Still, some job titles will go away, or at least go out of style, but many more will be created or transformed. Bret Taylor talked about “Agent Engineers”: former front-end developers who have built particular skills around controlling and orchestrating different Agentic AIs and their integrations with other systems. John Berryman’s book is essentially a guidebook for software engineers to become “Prompt Engineers”.

Agent Engineer and Prompt Engineer are good examples of what your newest job postings may be called, alongside the more traditional Software Developer, Quality Assurance, and DevOps Engineer roles you have hired for in the past.

Rapid technological change may reduce your reliance on some job titles, but it’s also going to create new ones that you have never hired for before, and that’s where leveraging experts like our team at AgilityFeat can be a big help when you need to grow your team affordably.

And now … the one (and only) reason you should reduce your team size

I’ve given you ten detailed reasons why AI does not mean you will hire fewer technical roles in 2025 or beyond. There is still one situation when you should consider reducing your team, and it has always been there:

You have not achieved a successful Product/Market Fit.

Success is measured in different ways in different organizations. However you define it in yours, if you can’t achieve it and you don’t have the ability to invest in a pivot, then by all means, reduce the size of your team. But don’t hide behind “the impact of AI” because it sounds more socially acceptable than, “we just weren’t making the profits we promised our investors.”

Using AI as an excuse is damaging because it stokes fear in other companies about the impact of AI on their own workforce, instead of encouraging them to look at it pragmatically and find ways to better achieve their mission with AI’s assistance.

How not to get replaced by AI

The common refrain is still true: “I won’t be replaced by AI, but I might be replaced by another software engineer who knows how to use AI better than I do.”

The best way for anyone, in any job, to not be “replaced” by AI is to learn how to use AI to augment their own skills and make themselves better at their job. The people who learn how to do that are the ones who will succeed in this new reinvention of software engineering.

As the technology landscape continues to evolve, organizations will need to focus on recruiting and developing talent with AI integration skills. Nearshore staffing is the most cost-efficient way to find cutting-edge human talent with the AI experience to take your team forward. AgilityFeat can help you find this talent in Latin America and build you a custom team of AI-enabled talent. Contact us today for a free consultation.

About the author

Arin Sime

Our CEO and Founder, Arin Sime, has been recruiting remote talent long before it became a global trend. With a background as a software developer, IT leader, and agile trainer, he understands firsthand what it takes to build and manage high-performing remote teams. He founded AgilityFeat in the US in 2010 as an agile consultancy and then joined forces with David Alfaro in Latin America to turn it into a software development staff augmentation firm, connecting nearshore developers with US companies. Arin is the host of the Scaling Tech Podcast and WebRTC Live.
