Leaders must actively re-engage skeptics to overcome early hiccups with AI
"I would love for AI to help us reduce the amount of toil that this complex environment imposes on our engineering teams. I think that there is a lot of opportunity in that, I think that is important work, but it is not the most exciting work."
Key takeaways
- Implementing AI across all functions (PM, design, engineering, QA) creates broader efficiency gains than just focusing on engineering.
- Legacy systems create AI barriers. Teams working on newer codebases see significantly better AI ROI than those maintaining 12-year-old legacy systems.
- Leadership must actively re-engage skeptics. It's not enough to provide tools. Leaders need to help engineers overcome early negative experiences with AI.
- Context window limitations are real. Large, complex codebases bump against fundamental LLM limitations, requiring sophisticated management strategies.
- Individual curiosity trumps seniority. Personal learning agility and curiosity matter more than traditional experience levels for AI adoption success.
- Operational burden creates AI opportunity. Super apps and complex platforms that require significant operational overhead are ideal candidates for AI-powered toil reduction.
About
I'm VP Engineering at Careem, the premier super app based in the Middle East. I've been with Careem for over 6 years and previously worked at Microsoft. Careem is pretty massive. We're talking about a platform with 75 million registered users, 3.5 million drivers earning on our platform daily, and 20+ services all within one super app.
The scale is impressive: we operate across 9 countries and 70 cities, predominantly serving the UAE and broader Middle East region. About four years ago, Uber acquired Careem in one of the largest exits in that region, and since then we've spun out as an independent company focused on building our everything app. In 2023 we welcomed investment from ‘e&’, the regional telecoms operator and technology investor. We now have two strong partners in Uber and e& helping us to scale the everything app across the region.
What are your experiences with adopting AI for your engineering teams?
We've taken a really comprehensive approach to AI adoption that goes way beyond just engineering. We've implemented AI across the entire product development lifecycle - in product management, design, engineering, and QA.
Our product managers use AI chat agents to help build better PRDs (Product Requirements Documents). Our designers use AI to create brand-aligned images and UI elements. Engineers experiment with various coding tools. And our QA teams use AI to test our personalized application. It's a truly organization-wide initiative.
"We've tried to adopt it from an engineering standpoint across the entire gambit of product engineering, not only in software engineering, we're also in product development, product design, and in QA as well."
What are the patterns you noticed in your engineering teams while adopting AI?
I've observed that software engineers basically fall into two camps when it comes to AI.
- On one side, you have engineers who are extremely curious and really want to experiment with the latest and greatest technology. They dive right in and want to try everything.
- On the other side, you have engineers who are very reserved and suspicious of new technology. Maybe they tried AI tools in the early stages, realized they weren't as good as promised, and then wrote them off as hype.
"Software engineers are by nature either in two camps: either extremely curious and they really want to go out there and experiment with the latest and greatest technology, or they're very reserved, they're very suspicious of this new technology."
For the engineers who became skeptical after early disappointing experiences, I have to play an active role in re-engaging them. It's not enough to just provide tools and hope people use them - I have to actively excite people and get them to give these tools another try.
This is especially challenging because these engineers aren't necessarily wrong to be skeptical. Early AI tools often were overhyped. But the technology has improved significantly, so I need to help people understand that what's available today is different from what they tried before.
"Now, it's on us as leadership to really excite them and get them to really start using those tools again."
I've noticed that AI adoption success varies significantly across different teams, and it comes down to the complexity and age of the systems they're working on. Teams working on our core marketplace (systems that have been built over 12 years) have a much harder time getting value from AI.
These legacy systems have a lot of technical debt and complexity, making it difficult for teams to go all-in on AI and see good ROI. But teams working on newer parts of our stack, like our food and groceries verticals or our restaurant exploration product, have found much more success.
"Software engineers work on a large part of our codebase which has been built over the last 12 years over Careem's lifespan. There is a lot of technical debt and complexity within that codebase. It makes it a lot more difficult for teams working on some of the core kernels of Careem's marketplace to really go all in on AI and get that ROI."
I've found that individual curiosity and learning agility matter more than traditional experience levels when it comes to AI adoption. Some people are naturally more curious and up to speed with new technology, while others are still catching up.
This suggests that successful AI adoption is as much about mindset and personality as it is about technical skills or years of experience.
"I think it also comes down to the individuals that we have on the team. Some of them are a little more curious, a little more up to speed with this technology. Others, they're still catching up."
What are the challenges you observed while adopting AI in systems with large codebases?
When we try to use AI for building entire features, we run into several significant challenges. First is the precision problem - it takes a lot of upfront prompting and working with AI to come up with a concrete, end-to-end plan for what needs to be built. Even then, the AI sometimes gets confused and goes off track.
Context windows are also a major limitation. It's impossible for an AI agent to keep our entire large codebase in memory. Plus, a lot of our legacy code isn't built on the most popular frameworks that these LLMs are trained on, which doesn't help set the AI up for success.
"It takes a lot of upfront prompting, maybe working with an AI to come up with a concrete end-to-end plan of what needs to be built. And even then the AI will sometimes get confused, get off track. Context windows are the size that they are, it's impossible for the agent to have the entire codebase in memory."
How are you approaching AI adoption for production systems?
We're taking a measured, step-by-step approach to production AI. We start with detection and analysis, move to customer communication during issues, and only gradually approach autonomous system changes.
This gradual approach allows us to build confidence and learn from each stage before moving to the next level of automation. It's a responsible way to introduce AI into critical production systems.
"Right now the focus has been around detecting issues and helping us root cause problems. Getting an agent to actually make changes on our production systems is something that we're still not comfortable with."
We're being very cautious about production AI implementation. We're exploring AI for managing customer expectations during outages - for example, automatically notifying customers that there's an issue and their order might be delayed. But getting an agent to make autonomous changes to production systems? That's something we're still not comfortable with.
"It will take some time before we are completely comfortable with getting an agent to do it fully autonomously, although there are plans for that."
I would love for AI to help reduce the amount of toil that our complex environment imposes on our engineering teams. This operational burden is important work, but it's not the most exciting work for engineers.
By using AI to handle more of the routine operational tasks, engineers could focus on more engaging and creative problems. It's about freeing up human creativity rather than replacing humans entirely.
"I would love for AI to help us reduce the amount of toil that this complex environment imposes on our engineering teams. I think that there is a lot of opportunity in that, I think that is important work, but it is not the most exciting work."
What is your mission with AI adoption?
Our mission at Careem is to simplify and improve our customers’ lives by taking complexity away from their daily experiences and codifying it in our systems. But this means we carry a lot of operational burden as a company.
Operating smoothly in this complex environment often requires human intervention when we encounter production hiccups. This creates a lot of necessary but not particularly exciting work for engineers.
"Our mission is to simplify and improve people's lives, and we do that by taking away a lot of the complexity from their lives and codifying it in our systems and our software. That means that we carry as a company a lot of operational burden."
What's your most memorable on-call experience?
Here's an interesting one: getting woken up at night to acknowledge incidents because team members in other locations have poor cell connectivity. I live in Dubai with excellent coverage, so I often get paged when our Pakistan-based team is on call but can't acknowledge an incident themselves.
These incidents are usually low-impact issues that self-resolve but need to be acknowledged within SLA timeframes. It's the kind of operational overhead that shows why better automation and AI tools could be really valuable.
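As a rough illustration of the kind of automation that would help here, a small triage step could acknowledge known low-impact, self-resolving alerts before the SLA timer expires and page a human only for everything else. This is a hypothetical sketch: the alert names, severity labels, and the paging and acknowledgement callables are placeholders, not a real incident-management API.

```python
# Hypothetical sketch: auto-acknowledge low-impact alerts that are known to
# self-resolve, within the acknowledgement SLA, and page a human otherwise.
# Alert names, severity labels, and the callables are placeholders.
from datetime import timedelta

SELF_RESOLVING_ALERTS = {"retry-queue-backlog", "transient-timeout-spike"}  # assumed names
ACK_SLA = timedelta(minutes=15)


def triage(alert_name: str, severity: str, page_human, acknowledge) -> str:
    """Decide whether an alert needs a human or just a timely acknowledgement."""
    if severity == "low" and alert_name in SELF_RESOLVING_ALERTS:
        acknowledge(alert_name, note=f"auto-acked within {ACK_SLA}; expected to self-resolve")
        return "auto-acknowledged"
    page_human(alert_name)
    return "escalated"
```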