AI adoption beyond developer productivity
“AI was supposed to do all the chores so that I can do the creative work, but it ended up doing all the creative work and I’m doing all the chores for it.” Hari Ramanchandra (Salesforce)

“Your consumers care about their order being delivered on time. They don’t really care whether we’re using Datadog or Resolve—so we’d rather buy tooling than build it, and spend our engineers’ time on customer value.” Mandar Rahurkar (DoorDash)
Key takeaways
- Focus beyond coding tools: While developer productivity gets attention, the real value is in release velocity, incident management, and reducing operational toil.
- Developer quality of life over productivity: Frame AI adoption around eliminating work nobody wants to do (3 AM calls, debugging flaky tests) rather than just making coding faster.
- Tribal knowledge democratization: AI's biggest impact may be scaling senior engineer expertise to junior team members, reducing burnout for key personnel.
- Buy over build in fast-moving field: With AI evolving so rapidly, building in-house solutions often results in obsolete tools before completion.
- Tangible cost evaluation: Evaluate AI solutions based on concrete value (preventing outages, reducing engineering time) rather than abstract accuracy metrics.
- Pre-production as testing ground: Low-risk environments like test systems are ideal for full AI autonomy without human intervention.
- Data sensitivity requires careful evaluation: Balance convenience with privacy, especially for core business data and customer information.
- Current AI operates like interns: Today's models need specific instructions, but the goal is abstract delegation like working with senior engineers.
- Embrace experimentation: The technology is moving too fast to wait for perfect solutions - organizations need to try things and iterate quickly.
- Last-mile operations present the biggest opportunity: Incident management and production operations are where engineers most want AI assistance, not necessarily the "creative" coding work.
About us
Hari: I'm VP of Engineering at Salesforce, leading engineering operations and DevOps infrastructure, focusing on analytics products. I come at this from the operational side of engineering.
Mandar: I support the ML group at DoorDash. This is actually the fourth revolution of AI that's happened in my lifetime, which tells you something about how fast AI is moving rather than how old I am! DoorDash is a last-mile delivery marketplace, and I bring a product-oriented perspective to this discussion.
What are our top 3 AI strategies for the next 6-12 months?
Hari: It's funny - when you ask that question, it reminds me of those videos where Sundar Pichai is saying "AI, AI, AI, AI." A lot of us leaders are doing that in our organizations. We keep asking engineers about AI adoption, beating them up asking "Have you adopted AI? What's your adoption number? How many times have you chatted with Cursor?"
There's a lot of focus on adoption, but after that initial push, I've observed we need to fine-tune our strategy on focus areas. When I look at how our engineers are adopting AI and the feedback I hear, it reminds me of that Reddit post where somebody said AI was supposed to do all the chores so they could do creative work, but it ended up doing all the creative work and they're doing all the chores for it.
Think about it - what percentage of your time do you actually spend coding? Maybe 20-30%? But there's so much focus on coding tools like Cursor. The actual toil is in the rest of the pipeline. My architect keeps saying all the focus is on the first mile, but what about the rest of the miles?
Based on that, we're focusing on three areas:
- Developer productivity (the obvious one with code generation)
- Release velocity and avoiding toil - like figuring out test failures, flaky tests, merge conflicts
- Production incident management - those 3 AM calls nobody wants
Mandar: I bring a product-oriented perspective to this. Instead of calling it developer productivity, I love calling it developer quality of life. We want to provide safety nets so developers aren't doing grunt work like writing unit tests, and so they're not getting woken up at 3 AM or taken away from their anniversary dinner.
Our three focus areas are:
- Developer quality of life - eliminating the work nobody wants to do
- Ability to move fast without breaking things - at our scale, any outage beyond 10-20 minutes is a conversation with my CEO, which I'd rather not have
- Scaling tribal knowledge - as we add new engineers, how do we take the battle-hardened knowledge from senior talent and inculcate that in junior engineers?
Why is tribal knowledge such a critical problem?
Hari: This is a really salient point because what I've observed is there's always one really strong engineer who has a lot of product background and ends up being on every call, regardless of who's officially on call. Team members burn out because of this.
If AI or any tools can solve this problem by uplifting all engineers and providing them with tribal knowledge and context awareness, that has a huge impact on the organization.
Mandar: Exactly. It's about democratizing that expertise so you're not always relying on the same senior people to solve every production issue.
How do we approach build vs. buy for AI solutions?
Mandar: I learned this lesson when I was at Yahoo. Marissa Mayer had a great example: when we decided to move out of search, the analogy was that we were going to buy grapes from Google and add a value layer on top by making wine.
We think very similarly about this. What do our users and consumers care about? They care about their order being delivered on time and the app experience. They don't really care whether we're using Datadog or Resolve.
When it comes to tooling, we'd rather buy than build. There are exceptions where user privacy and cost are at stake, but generally we'd rather spend our engineering time meeting user expectations than building developer tools.
Hari: Plus one to that. The field is progressing so much that by the time you think of building something, some other product has already come out and your idea is obsolete. It doesn't make sense to invest effort building something that's readily available when you could be focusing on time to market for your own products.
The key things to keep in mind are your customer data and where there's risk of exposure. Also, where context awareness is really important and the agentic layer has to intertwine with your workflows - if it's core to your business and IP, that's where you might consider building rather than buying.
In engineering operations where I come from, none of this is core to the product - it's enhancing the product. So we're much more open to off-the-shelf solutions, even though most of my engineers might disagree because they want to build.
"The field is progressing so much that by the time you think of building something, some other product has already come out and your idea is obsolete."
How do you handle data sensitivity in AI solutions?
Hari: Public cloud adoption has already made this conversation easier than before. I see companies like OpenAI offering services with silos for each organization. It's a matter of gaining trust and ensuring that data used for an organization isn't used for the vendor's benefit.
Transparency is critical here, especially when we're exposing our logs and customer records - the belly of the business. It depends heavily on the area you're operating in, especially when dealing with PII data versus scrubbed transaction data.
What criteria do you use to evaluate AI solutions?
Mandar: I think about this differently for AI-first products versus system products. Things are moving so fast in the AI landscape that by the time I have a team of three people building something, that technology is already outdated.
Instead of striving for accuracy - because you can go on vacation, come back in 3 months, and there'll be a better model - I focus on value addition. Can we bring this to our engineers and add value to their lives as quickly as possible?
The other way to look at this is cost. What would it actually cost us to build this internally versus buy it before it becomes outdated?
When I define cost, it's very tangible. If I have two outages a year and can avoid at least one, there's a tangible cost associated with that. Then if I have to do it with my own engineering team, what would it cost me in engineering time?
"The cost for me is very tangible. If I have two outages a year and I'm able to avoid at least one, then there's a tangible value associated with it."
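Mandar's framing above lends itself to a back-of-envelope calculation: compare the tangible value of outages avoided against the cost of building in-house versus buying. This is a minimal sketch of that arithmetic; all the dollar figures and team sizes are hypothetical placeholders, not DoorDash numbers.

```python
# Back-of-envelope buy-vs-build comparison, following the framing above:
# value = tangible cost avoided (e.g., one prevented outage per year),
# build cost = engineering time, buy cost = a vendor fee.
# Every figure below is a hypothetical assumption for illustration.

def annual_value(outages_per_year: int, outages_avoided: int,
                 cost_per_outage: float) -> float:
    """Tangible value: the cost of the outages the tool prevents."""
    return min(outages_avoided, outages_per_year) * cost_per_outage

def build_cost(engineers: int, months: int,
               cost_per_engineer_month: float) -> float:
    """Cost of building in-house, expressed as engineer-time."""
    return engineers * months * cost_per_engineer_month

# Hypothetical inputs: two outages a year, one avoidable, $500k per outage;
# a three-engineer team for six months at $25k per engineer-month;
# a vendor subscription at $200k per year.
value = annual_value(outages_per_year=2, outages_avoided=1,
                     cost_per_outage=500_000)
build = build_cost(engineers=3, months=6, cost_per_engineer_month=25_000)
buy = 200_000

print(f"value avoided: ${value:,.0f}")  # $500,000
print(f"build cost:    ${build:,.0f}")  # $450,000
print(f"buy cost:      ${buy:,.0f}")    # $200,000
print("buy" if buy < build else "build", "wins on cost alone")
```

The point of the exercise is Mandar's: once cost is made concrete, the comparison is no longer about model accuracy but about whether the value arrives before the in-house tool would even ship.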
What's the next frontier for solving hard problems with AI?
Hari: In general, I feel like what we're looking for is better developer quality of life. Right now it's almost like Tesla FSD asking you to keep your hands on the steering wheel all the time. If you drift off, it says "I'm out." What's the point of having a self-driving car if you want to be more hands-off?
I think value addition should become much more autonomous at some point. Don't come back and tell me "this happened" and then I have to ask another question.
One area I'm excited about is pre-production environments. How available are our test environments? Many times test environments are down, developers can't run tests, our quality team can't run tests. That might be a low-risk, reversible scenario where AI can be in full-on FSD mode with no intervention required, no questions asked to the developer.
That's the direction that will really add value by removing grunt work completely and eventually eliminating those 3 AM calls, at least for pre-production environments.
Mandar: I don't think AI is necessarily doing all the sexy work and leaving us with chores. We're still in a very early stage where AI models are operating like interns. I'd love to see them become senior engineers where we can delegate at a more abstract level.
When you have an intern, delegation is very specific. As you work with more senior engineers, it becomes more abstract. I'd love to see agentic systems head in that direction, and I don't think that's too far away.
"AI, or at least some of these models, is operating as an intern, and I would love to see us delegate at a more abstract level. When we have an intern, the delegation is very specific; as we start talking to more senior engineers, it's more abstract."
What's your vision for the future of AI in engineering?
Hari: First, thank you for this great event. What's refreshing is that I actually did a cold call to Resolve AI because one of my colleagues found out about the company. That shows the desperation we're in right now!
Resolve AI has touched upon the right area to focus on - the last mile. That's where AI can really make a difference. It's an area where developers and engineers are willing to hand over work and not worry about "AI will take over my job." That part of the job, we definitely want AI to take over - not the sexy part.
I'm most excited to see some of these pain points and PTSDs getting resolved for most of us.
Mandar: AI is here to stay, whether we like it or not. It's better for us to embrace it. The philosophy we've taken is to try things out. Things may not stay in the same shape and form, but they're moving really fast.
The models that were cutting-edge a year ago? Open source models can do that right now. Things are moving incredibly fast, and if you don't hitch onto this wagon, everybody's going to be left behind.