New skills engineering teams need while adopting AI

"What we are hoping is that these tools will help us codify tribal knowledge. As we start to experiment with some of these tools, they will see the patterns that are not always necessarily voiced by the engineers themselves."

Key takeaways

  • Working with AI requires new skills. AI coding tools demand skills similar to those needed to manage human engineers - understanding capabilities, communicating clearly, and evaluating output.
  • Senior engineers excel with AI. Their deep contextual knowledge makes them particularly effective at leveraging AI tools, contrary to what many might expect.
  • Start simple and then scale. Begin with straightforward tasks, build confidence through early wins, then gradually tackle more complex challenges.
  • Context is everything. The more relevant context you provide, the better AI performs, but context window limitations require strategic choices.
  • Prompt engineering is here to stay. Rather than becoming unnecessary, prompt engineering skills are becoming more important as teams get more sophisticated with AI.
  • Production AI needs patience. While AI is promising for production systems, organizations should approach this carefully and systematically.

About

I'm Ben Chang, VP of Engineering at Guidewire Software, where I've been for 12 years now. Guidewire is a core system provider that supports the property and casualty insurance industry - basically, we build the platforms that insurance companies use to run their business.

I lead the App Platform engineering team, which builds the platform that our customer developers use. So my team creates the tools and frameworks that other developers use to build applications on top of Guidewire's core systems.

What has been your experience adopting AI for your engineering teams?

The impact has been really transformative for us. My teams have been trying out different coding agents to help make developers more productive, and we're finding that engineers can actually get production-worthy code out of these tools.

We're not just jumping in blindly though. We're going use case by use case, figuring out what works best and where these tools are most effective. It's a methodical approach to understanding where AI adds real value.

"We have trialed a bunch of different coding agents that will help amplify the productivity of our teams. A lot of our engineers are finding that they're able to get production worthy code out of these agents."

What are some interesting learnings you had while adopting AI in engineering?

This is where I learned something really interesting: using AI coding tools effectively requires skills that are much more like being an engineering manager than being a traditional software engineer.

Think about it - when you're managing engineers, you need to understand what they're capable of, give them clear instructions and context, and then evaluate their work. It turns out that's exactly what you need to do with AI agents too.

There's still the coding aspect where you need to make sure the technical output is correct, but there's a lot more management-style work involved than people expect.

"This skillset mirrors much more what an engineering manager tends to do rather than a traditional software engineer. Because what we're finding is that you have to become very familiar with what the agent is capable of doing, give the agent good instructions and good context about what the task is, and then you have to be able to evaluate the result."

Based on my experience, there are three main skills you need to develop:

  • First is deep understanding - you need to really know what the AI agent can and can't do. Just like you'd need to understand a team member's strengths and weaknesses.
  • Second is clear communication - you have to give the agent good instructions and context about the task. Vague requests lead to disappointing results.
  • Third is quality evaluation - you need to be able to look at what the AI produced and determine if it's actually good, correct, and does what you wanted (see the sketch after this list).

It's basically the same skillset you'd use to manage human engineers, which I find pretty fascinating.
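
Of the three, evaluation is the skill that lends itself most readily to automation. As a minimal sketch (assuming pytest is available, and not describing Guidewire's actual tooling), here is the kind of gate you can put in front of agent-generated code: run it against tests the reviewer wrote before accepting it.

```python
import subprocess
import tempfile
from pathlib import Path

def evaluate_generated_code(generated_source: str, test_source: str) -> bool:
    """Run a human-written test suite against agent-generated code.

    Hypothetical helper for illustration: both files are written to a
    temp directory and pytest runs there; the code is accepted only if
    every test passes.
    """
    with tempfile.TemporaryDirectory() as workdir:
        Path(workdir, "generated.py").write_text(generated_source)
        Path(workdir, "test_generated.py").write_text(test_source)
        result = subprocess.run(
            ["python", "-m", "pytest", "-q", workdir],
            capture_output=True, text=True,
        )
        return result.returncode == 0  # accept only on a fully green run
```

The important detail is that the tests come from the reviewer, not the agent - otherwise you're grading the work against the worker's own rubric.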

This might surprise you, but some of my senior engineers have been the most excited about these tools. At first glance, you might think junior engineers would be more eager to try new technology, but there's a good reason why senior folks are embracing it.

The key insight is that context matters most when working with AI. Senior engineers typically have the best understanding of business context, technical context, and strategy. They know how to fold all of that information into the development process, which makes them much more effective at using AI tools.

"Some of our senior engineers have been some of the most excited to use these tools because what these tools really allow you to do is become much more productive. When you are focused on context, it's the senior engineers who typically have the best understanding of the business context, the technical context, the strategy and how to fold all of those into the development process."

How about using AI for your systems in production?

My team hasn't explored production applications much yet - that's still mostly uncharted territory for us. We're interested in it and exploring different tools, but we're being careful about it.

One area where I see real potential is using AI to help codify tribal knowledge. The idea is that AI tools could observe patterns in production systems that engineers don't always voice explicitly, then help solve problems more systematically.

"What we are hoping is that these tools will help us codify tribal knowledge and so that we're hoping that as we start to experiment with some of these tools, that these tools will see the patterns that are not always necessarily voiced by the engineers themselves."

What's the imperative driving your AI exploration for production systems?

From a business perspective, I see clear value in anything that makes production systems run more consistently and with higher reliability. That's always going to have business benefits.

My organization is taking a comprehensive approach - we're looking across the entire development lifecycle to see where AI might help improve different areas. It's not just about one specific use case, but thinking holistically about the whole process.

"There's always going to be a business benefit to focusing on how to get our production systems running more consistently and with higher reliability."

What has been a surprise for you in your AI adoption journey?

Here's something that caught me off guard: when we started using these tools, there was this idea that prompt engineering might eventually become unnecessary - that the tools would get so smart you wouldn't need to be precise about how you ask for things.

But the opposite has happened. We're finding we need to do more and more prompt engineering, and we're getting better and better at it. This reinforces the idea that context is really key to using these models effectively.

"When we started with these tools, there was the notion that prompt engineering might be something that may eventually go away. But what we're finding is that we have to do more and more of it and we are getting better and better at it."

My team runs into a common problem: we have very large codebases, and AI tools have limited context windows. You can't just dump everything into the AI and expect it to understand your entire system.

This means we have to be very strategic about which pieces of context we include. It's definitely part of the challenge of working with these tools - figuring out how to give the AI the right information without overwhelming it.

"Sometimes the context window that you can provide is limited and so you need to be very specific about which pieces of context you include. We have very large code bases. And so that's naturally part of what comes with it."

What advice do you have for engineering teams starting on their AI adoption journey?

My biggest piece of advice is to expect a learning curve. A lot of people try these tools once, set their expectations too high, and then get disappointed when the AI doesn't magically solve all their problems.

My recommended approach is to start small: give the tools simple tasks, get some early wins, and gradually build up your understanding of what the agents can do. That's much more likely to lead to success than jumping straight into complex problems.

"Expect a learning curve. There are a good number of folks who give these tools a try and their expectations are sometimes too high for an initial attempt. I think it's very important to give these tools simple tasks, get early quick wins, and then try to build your knowledge and familiarity of what the agents are capable of doing."

I'm particularly interested in using AI to get better insights into customer experiences. Right now we have all the typical analytics metrics, but I think AI could help us understand customers in a deeper way.

Specifically, I'm hoping to analyze support cases and actual language from customers to get insights we don't have today. The goal is to catch customer frustration earlier in the process, before it escalates to executive complaints.

"I'm really hoping to get more pointed and qualitative feedback about how customers are experiencing our products. Through support cases and language from actual language that we can interpret and highlight, I think I'm expecting to get a little bit more insight than we have today."

I have a theory about customer escalations: by the time a customer escalates to the executive level, they're usually very frustrated. But that frustration actually shows up much earlier in the support ticket process, when they're struggling to get something done.

If AI tools could pick up on that early frustration and alert our support teams sooner, we could potentially resolve issues before customers get really upset. It's about being proactive rather than reactive.

"If we're able to use these tools to pick up on that frustration and highlight to our support engineering teams earlier in that lifecycle, I think that we will be able to produce a better customer experience."

What's your most memorable on-call experience?

My team spent nearly a month trying to track down a bug that caused our software to crash consistently in lab environments, but never in the field. After weeks of investigation, we finally discovered the cause: a router in the system had received a firmware update that was inadvertently dropping a single bit in an MTU-sized packet. Just one bit! That tiny error was enough to crash our entire software system.

"What it came down to was that one of the routers in the system had had a firmware update. And in that firmware update, it was inadvertently dropping one bit in an MTU packet. And so that one bit caused our software to crash."
