AI can 10x developers…in creating tech debt


Ryan and Michael dive into the uneven productivity results of AI tools, how tech teams are evolving their roles and work in response to these massive technological shifts, and what the nervous developer can do to maintain joy in their work.

Founded by computer and data scientists from University College London, TurinTech automates your code optimization so you can deploy better AI models. Preview their new Artemis coding agent for free.

Connect with Michael on LinkedIn.

User Adam Franco won a Stellar Answer badge—and this week's shoutout—for their answer to "How can I delete a remote tag?"
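
(For the curious, the short version of that answer: `git push --delete origin <tagname>` removes the tag from the remote, and `git tag --delete <tagname>` removes your local copy; see Adam's full answer for the details.)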

TRANSCRIPT

[Intro Music]

Ryan Donovan: Hello everyone, and welcome to the Stack Overflow Podcast, a place to talk all things software and technology. I am your host, Ryan Donovan, and today we're talking about a new category of AI-generated tech debt. AI has created a lot of new things for us, and some of the things it's caused are problems. So, we're gonna be talking about why we're not seeing the promised productivity gains and how we can get better productivity from AI. And my guest today is Michael Parker, VP of Engineering at TurinTech. So, welcome to the show, Michael.

Michael Parker: Thanks, Ryan. Great to be here.

Ryan Donovan: So, before we get into the meat of the topic, we like to get to know our guests a little bit, so tell us a little bit about how you got into software and technology.

Michael Parker: My dad was a programmer growing up and we had a lot of computers kicking around, so I started programming when I was about 11. We had a ZX Spectrum, so I was coding some BASIC. I come from quite a big family, so I made a few games with my brothers and sisters and you know, ‘what's your name? I like you. I don't like you.' That kind of fun stuff. And so, I used it growing up as a way to connect to people, making things for my friends and family. Yeah. And then I did computer science at uni. I went into games. I was really into computer games for a while, and then I pivoted back to hands-on keyboard developing. So, I was a game designer for a while, and then I went into programming, and then I eventually shifted into team leadership and management. And I spent just over six years at Docker, most recently, building Docker Hub and Docker Desktop for millions of developers, which was great. And then I got bitten by the AI bug like everybody else, I guess there are some bulls and some bears out there, and I'm one of those bulls, and I was so impressed with the fundamentals of what AI could do and the models that I thought I really need to jump in with both feet. So, I joined TurinTech [in] February, and I've been there ever since, building what I hope will be the next generation of developer tools. I'm on a mission to make development more fun, bring the joy back to development, get rid of all this tech debt, and give us some hope again.

Ryan Donovan: Everybody's trying to get that developer joy back into the game. Everybody saw the promise of AI three years ago or whatever it was, but as we get into it, we're not seeing the results. I think there's a stat you all shared that experienced developers are 19% slower when using AI tools. Can you dig into that stat and your understanding of it?

Michael Parker: Yeah, I think it's actually really uneven across the industry. Software development is so large now. There's so many different types of organizations, and developers, and code bases that I'm starting to see. There's not just one developer that you can build for with one set of problems. There's such a broad range, and there are certainly some cutting-edge small teams that are getting insane productivity with AI, especially when they work on code bases that are in modern technologies. If they're writing everything in Node and Python, and using React as a front end, and you've got two or three developers, and it's a greenfield code base, let's go, right? It's maximum speed. I'm a thousand percent going, but unfortunately, that's not the whole world, right? There's a lot of legacy code. There's a lot of enterprise developers, and LLMs just aren't trained on your internal libraries, and all these ancient versions of things that you might be using, and you can't just get rid of them overnight. We can't just rewrite the world's code into Python and React overnight, right? So, we have to be able to deal with this. This is where I'm seeing the change in the expectations, depending on which edge of the spectrum you're on. So, this 19% is a misleading figure, right? 'cause for some developers, AI is just completely useless, and for other developers, it's like the savior. It's the Messiah that's arrived to save us all and make us 100 times more productive. So, I think averaging is probably not the right thing to do. You know, enterprises that are really struggling, they're really getting alienated by some of the people at the other end of the spectrum who are claiming AI is gonna save them.

Ryan Donovan: With those enterprise code bases, you have so much context that you have to have for any change, right? There's so much code, and then the AI tools – one of the challenges has been giving it that context. Have you found anybody is being more successful? Any techniques to give AI context?

Michael Parker: Yeah, absolutely. There's a few different classes of developers that are emerging here, I think. There's the cutting-edge developer coach. I dunno what this role is called. It's not quite a developer, it's not like a manager; it's like something in between. And they tend to spend all of their time tweaking the factory rather than tweaking the code. And so, when their AI writes bad code, they don't fix it; they fix their prompt, or they fix their rules file, or they build a subagent. And there's all these teams that are hacking together all of these pieces because the system that they need does not exist in the marketplace. We have all these IDEs that have chat boxes, and is a chat box the perfect way to interact with a multi-agent development system? I'm not sure, you know, maybe, but maybe there's more to come here. And so, it's interesting to see this role that's emerging that's not development, but I don't know what to call it, but it's very interesting. And how do we build tools for those people, and how do we build tools for developers that stay in development? There's kind of two different problems.
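
For readers who haven't seen one, a rules file is just a standing set of plain-text instructions the agent reads before every task – in the spirit of Cursor's .cursorrules or Claude Code's CLAUDE.md. A hypothetical sketch (every rule here is invented for illustration):

```
# Project rules (hypothetical example)
- All new code is TypeScript with strict mode enabled; never use `any`.
- Call services through our internal api-client wrapper, never fetch directly.
- Every new function gets a unit test next to it.
- Do not touch anything under legacy/ without asking first.
```

When the AI produces bad output, the "developer coach" Michael describes edits this file rather than the code.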

Ryan Donovan: And I've heard of people building style guides for AI agent coding tools. They're building specs for the stuff they're trying to build, getting incredibly detailed in the prompts, to the point where what you're doing is essentially writing code without writing code, right? Like in a conversation I had the other day: ‘iterate over this for loop to produce this.' That's basically code.
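
To make Ryan's point concrete, here's roughly what such a prompt is specifying. A minimal Python sketch – the prompt wording and the data are invented for illustration:

```python
# The prompt "iterate over these orders and sum the amounts per customer"
# is already a line-by-line spec for this code.
orders = [
    {"customer": "acme", "amount": 120},
    {"customer": "acme", "amount": 80},
    {"customer": "globex", "amount": 45},
]

totals: dict[str, int] = {}
for order in orders:
    totals[order["customer"]] = totals.get(order["customer"], 0) + order["amount"]

print(totals)  # {'acme': 200, 'globex': 45}
```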

Michael Parker: Yeah. And it might listen to you, and it might not listen to you. This is the magic and also the pain of AI. You give it instructions, and sometimes it listens, and sometimes it doesn't. And sometimes the instructions you give it are not really quite specific enough, and it gets confused. And maybe there's some context that you have in your brain that you didn't realize it didn't know, and it's not in its context window, and it makes some assumptions. And so, this is where we really need to build better tools with memory, so it understands you and your organization, and your different projects, so it can actually do sensible things. And when I think about this problem, there's four places I think we need to fix. There's planning upfront, so you can give it better context and a better chance of achieving sensible outcomes. Then there's the coding, which is the babysitting: it's made your draft, but you need to fix it; you need to reiterate. Then there's reviewing at the end, and you might do that in GitHub or a different review system. And then there's ongoing maintenance that always has to happen, right? Your dependencies go out of date, your libraries need updating, you change your code styling, and you need to do some refactoring. And so, I think we need better tools at each of these four steps.

Ryan Donovan: They're not the same tools, and not the same problems. With planning, that is an age-old problem. We've published several articles about requirements being the central problem of software engineering: requirements changing, being a moving target, and just getting people to have a solid plan in the beginning. Is there a way to be better at that in the AI age?

Michael Parker: Yes, absolutely. We're trying to work on this problem right now with Artemis. We are building a set of planning agents, 'cause I think the next generation of AI tooling is gonna move away from an individual having an AI tool to help them do their job, and we are gonna start seeing more teams of humans and AIs collaborating as these AIs start infiltrating our social circles and our communication platforms. I'm hoping I'm not talking too dystopian here, but I think we need to start building tools that can manage teams of AI, and so what we're trying to build is more like a planning team. So, we think that planning has at least three different steps. There's the requirements gathering, which is: what do you want this feature to actually do? Who is it for? And then there's technical requirements. So, we have a software architect persona, which decides on frameworks, and libraries, and that kind of thing. And I think you need that separately from requirements, right? And depending on who's using the agent, they're gonna have expertise in maybe one or the other, or maybe none of them, and so, you might be able to fill in the gaps on one and then get advice and education from the other. One of the key pieces of collaboration in a real human team is around product managers, designers, and engineers in this triad of different expertise, but that's quite inefficient if you're constantly double-checking all of your decisions with all these other humans. So, we think AI can play a role in helping you quickly sanity-check what you're doing, either as a product manager talking to an engineering agent, or an engineer talking to a product manager agent. And of course, all these agents have to have context around what you're doing. And so yeah, I think absolutely we can build better planning agents and better planning tools, and I think there's a lot of interesting work in the market. Everyone's got a slightly different take: we're seeing a bit of the spec-driven development flow, and we've got Kiro from AWS, and Cursor, and Claude Code coming up with these different types of planning. What we are really focused on is dealing with uncertainty and asking very specific questions, and then educating the user. So, one of the things I miss when I'm using AI is the learning. Too often, I feel like I'm in the backseat of a Ferrari with broken steering, and it's just smashing down the motorway, and I'm like, ‘where are we going? Like, I think I know…'
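
The persona split Michael describes can be sketched without any particular framework. Here's a minimal, hypothetical shape for it in Python – ask_llm stands in for whatever model client you use, and none of this reflects how Artemis is actually built:

```python
# Hypothetical sketch of a persona-split planning step: a requirements
# persona and an architect persona each review the same feature request,
# and a human reads both before any code gets written.

REQUIREMENTS_PROMPT = (
    "You are a product manager. List the open questions someone must "
    "answer before building this feature:\n{request}"
)
ARCHITECT_PROMPT = (
    "You are a software architect. Propose frameworks and libraries for "
    "this feature, with a one-line trade-off for each:\n{request}"
)

def ask_llm(prompt: str) -> str:
    # Stand-in for a real chat-completion call; this stub just echoes.
    return f"[model response to: {prompt[:50]}...]"

def plan(request: str) -> dict[str, str]:
    # Same request, two different questions; neither persona writes code,
    # and the human fills gaps in whichever area they know least.
    return {
        "requirements": ask_llm(REQUIREMENTS_PROMPT.format(request=request)),
        "architecture": ask_llm(ARCHITECT_PROMPT.format(request=request)),
    }

if __name__ == "__main__":
    for role, answer in plan("Add CSV export to the reports page").items():
        print(f"{role}: {answer}")
```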

Ryan Donovan: Yeah, you just press a button and out comes whatever the product is.

Michael Parker: Right. And maybe we'll crash, or maybe we'll get there fast. I don't know. And then, if we do get there fast, have I learned anything? Am I leveling up? And this is a real problem for junior developers entering the market. How are they gonna level up? The hiring's starting to go off a cliff, and how do we help people keep learning? So, if I ask AI a question or it asks me a question, I want a set of options, right? Did you know about this and this? This is good for this reason. This is good for this reason. Are you building a quick prototype? Are you building something for an enterprise? How much scale do you want? You know, all of these decisions should ultimately be taken by a human, but they should be educated along the way.

Ryan Donovan: Because you know, the devil is in the details of those things, right? You can come up with the scaling plan, you can come up with the requirements, but then, the implementation – if you just hand that off to an AI, do you think there'll be big problems? Like you said, we don't know if it'll work. We don't know if it'll crash.

Michael Parker: Yeah, and I think this is one of the big questions that we have to answer as an industry. In two or three years, do we think we'll get to a state where AI can get it right the first time? If the answer's yes, then we should put all of our effort into upfront planning: getting it the right context, the right memory, and giving it the perfect context window, so the code is perfect when it comes out. I don't think that is gonna be the case. I am more of a skeptic on that front, and one of the reasons is that, firstly, even if it starts outputting perfect code in two years' time, what about the rest of the code? We don't have perfect code everywhere else, right? So, we still have that. And then there's maintenance work, like I mentioned earlier. Even if it's perfect today, it might be out of date tomorrow. So, I think we need to start implementing different things. We need better planning, for sure, but we need some better way of doing maintenance work and refactoring updates, and that's the kind of work that humans don't really enjoy doing. So, I really want to start taking that out of the equation for humans.

Ryan Donovan: Nobody wants to upgrade Java.

Michael Parker: Right. Or do a Python 2 to Python 3 migration. Geez.

Ryan Donovan: Right.

Michael Parker: Let's try and get humans doing the creative problem-solving work, get 'em to stay in flow so they can have a good time, and then have AI fix it afterwards and do the boring, mundane stuff. And at the moment, I think too often it's the other way around, right? AI's doing the fun stuff, and then we are left reviewing thousands of files, and it's painful, right? Because everyone prefers writing code to reviewing code.

Ryan Donovan: The tech debt stuff, you know, that often sits around. I've seen companies where they've stayed on the same version of Java for years, but talk to somebody at AWS, and they did a Java upgrade that they say saved 6,500 developer-years. I think it was based on the amount of lines of code that it would change. They have huge code bases, right? It's all about having these very structured upgrade paths. We're talking about AI coding getting better, AI code reviewers, these sorts of maintenance things with structured upgrade paths. It seems to me that the planning part is gonna become the most important part of software engineering. What do you think about that?

Michael Parker: I think it's so hard to predict. Everything is moving so fast. I think we've been playing with planning agents for about six months, and it really depends on who it is. There's some developers who want to build brand new applications, and they have a vague idea of what they want, and a really long Q&A requirements exploration process. They love it. It's fantastic, like, ‘where do you wanna deploy this? You could use this, or you could use that. And here are the trade-offs.' And it's, ‘do you want me to add AI to this? And what tokens do you wanna use? Ah, this is great. It's building this whole thing.' And then, you have someone who's very sure exactly what they want. And when you get to question three, they're like, ‘stop asking me questions. Just do it. I told you what to do. Go get the context and get on with it.' So, I think it's really hard to build a tool that sort of satisfies everyone. Andrej Karpathy said something interesting a while ago about a– what was it, like an autonomy slider for these tools, where it's like, how much autonomy do you want to give it? And maybe there's something in there about learning each user's preference for how much autonomy to give these AI tools.
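
That autonomy slider is easy to picture as configuration. A tiny, hypothetical Python sketch – the level names and the action list are invented for illustration:

```python
from enum import Enum

class Autonomy(Enum):
    # Hypothetical levels for how much an agent may do without asking.
    ASK_EVERYTHING = 1  # confirm every single action
    ASK_RISKY = 2       # confirm deletes, migrations, dependency bumps
    AUTONOMOUS = 3      # act freely, report afterwards

RISKY_ACTIONS = {"delete_file", "run_migration", "bump_dependency"}

def needs_confirmation(action: str, level: Autonomy) -> bool:
    # The slider: the same action is gated differently per user preference.
    if level is Autonomy.ASK_EVERYTHING:
        return True
    if level is Autonomy.ASK_RISKY:
        return action in RISKY_ACTIONS
    return False

print(needs_confirmation("edit_file", Autonomy.ASK_RISKY))      # False
print(needs_confirmation("run_migration", Autonomy.ASK_RISKY))  # True
```

Learning each user's preference, as Michael suggests, would just mean adjusting this setting per user over time.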

Ryan Donovan: It almost reminds me of what Stephen King said about writers: some people plot everything out and have it all set when they start writing, and some people like going by the seat of their pants and figuring it out on the fly.

Michael Parker: Yeah. And also, a lot of people wanna get to a quick prototype just to see what AI would do. So, one of the things we're exploring is this idea of preemptive prototyping. So, as you are planning, we can have another agent go generate some code, so you can see, ‘if I just stopped planning here, this is what the application would look like: here are the technical rules it would use, here are the libraries, the technologies, the structure it would build.' And so, you can go backwards and forwards on how much detail you need in the plan. Because if you don't know what it will do when you finish, you don't really know how much more you have to tell it.

Ryan Donovan: Yeah, you get to see that fast feedback loop, and that's something I keep hearing about software development in general: it's better to have a fast feedback loop, a fast CI/CD cycle. Are there other ways in the AI code pipeline that we can encourage faster feedback?

Michael Parker: Software development as a whole might change in its approach, especially around the quality bar a pull request needs to meet before merge. On weekends, I'm vibe coding a game with my 9-year-old as a way of introducing him to technology, and I don't know whether to teach him coding or not. I don't know if that's gonna be a valuable skill, but it's interesting. We are not writing any code ourselves, right? We're just talking to AI, and we're just building this game. And there are moments when we are in flow. We are thinking about an idea, and we can just build it in minutes. We have this little snake game. We have this multiplayer. We got two things, and we're fighting, and he's like, ‘hey, let's add a power-up, and we can be invincible for five seconds.' I'm like, ‘okay, make it invincible.' And then it happens, and we play it, and it's so magical that we can build at the speed of thought. That's how fast it is. But then you hit a roadblock, and you give it a prompt, and it just breaks, and the whole thing stops. And then, you know, 9-year-olds get bored very quickly, more than developers, right? He just wanders off. Now I need to refactor, and I look at the code, and there's like a 2,000-line file, and I'm like, ‘I don't want to do that anymore. I don't want to refactor this.' I could talk him through it, but AI should be doing that. Imagine a world where you can vibe the creative stuff yourself, and then pass it off to AI, and AI will just fix it. That would be my perfect vibe coding world: I can merge whatever rubbish I want, and AI just comes along and sweeps up after me, and it cleans up, and it's like, ‘Mike, I know what you mean. Let me just put it into some proper modules.'

Ryan Donovan: AI doesn't necessarily produce maintainable, readable code. One of our junior writers here doesn't have much of a development background, and we got her to start vibe coding. She said, ‘I built something,' and then she showed it to her software developer friends, and they were like, ‘what the hell is this function doing?' And then, like you said, you don't want to do that refactoring, but from a lot of what I see, the future role for developers seems to be code review and refactoring.

Michael Parker: Yeah. And developers are really sad about this. I talk to so many developers, and I actually think some of them are going through grief. I think it's a joke, but also pretty real, like the five stages of grief with denial, and anger, and bargaining. I see developers in each of these buckets, right? Denial: this AI thing will go away. I just need to ignore it. Real developers are [like], ‘it's just creating crap.' Or they're angry about it: why is my CTO forcing me to use this AI stuff? It never works. Or they're bargaining. So, I think different developers are in different buckets here. One thing is clear: they're not having as much fun as they used to. Most of them. I know some of them are, but a lot of them are like, okay, now I'm waiting for two minutes for AI to finish, and then it finishes, and it's nothing near what I wanted, and it's ignored my rules. I told it what to do. It didn't do it. It's just gone down a rabbit hole. It's doing this stupid escape hatch stuff, not checking for errors properly, and I don't wanna be reviewing thousands of lines of code that doesn't make any sense. It's not what I expected. I was talking to one developer last week. He said, ‘I used to be a craftsman whittling away at a piece of wood to make a perfect chair, and now I feel like I'm a factory manager at IKEA. I'm just shipping low-quality chairs.' And yes, it's faster, and the chairs are fine, but that craft is now escaping him, and he feels sad about it.

Ryan Donovan: It's a larger issue with the way the world and business are moving, right? Everybody's moving from craftsmanship to mass production. This is a larger question, I think.

Michael Parker: It's a big question, right? And who do we build for? What does the developer role look like in two or three years time? Do we build tools for how the role is now, or how it is then? And how are teams gonna be made up? Are you still gonna have product managers, and designers, and testers? And how are these AI agents all gonna be intertwined? It's very difficult, and we see a lot of friction between heads of engineering who can see the future of these AI tools. And then all the developers – they're happy with their job. They like developing, they like writing code. Why do I have to give up writing code? I've been doing that for 20 years, and this code is not as good as mine. So, I think it's difficult.

Ryan Donovan: Yeah, I think it's difficult, and in a future where you could just say, ‘hey, I need a whiteboarding app,' or whatever little thing you need, what's the role of all these little niche companies?

Michael Parker: Yeah. And how many developers do you need in the future? And does everybody want to work in a two-person company? And this has been the direction of travel for a long time, right? As technology gets better, it empowers the world to do more with less. So, people all over the world, you know, in Africa and India and China, they're all developing applications now, and you can deploy on AWS with infinite scale in minutes. And this is all amazing. Is this just a continuation of that trend where you can do more with less? And then what does that mean for a department, right? Do you split your five teams into 20 teams, and you know, how do you manage that? That's an organizational problem.

Ryan Donovan: On the flip side of that, I think in times of greater automation, people have found ways to grow into different jobs, different roles, different ways of working. These are certainly weird times, right?

Michael Parker: I'm having probably more fun than most, right? Because I moved into management. I'm less attached to the coding part of it. I still code for fun, but I've barely written any lines of code manually this year, and this could be the last year that I ever write any code manually. You know, I do still do it sometimes 'cause it's just quicker to edit the title of a button or something. I can just search for it and change it. It's actually just about quicker for me to do it, but with a better setup, you know, maybe not: if I can talk into a microphone and just tell it what to do, then maybe that's quicker. But yeah, there's definitely new roles opening up. It's a different set of skills, and this set of skills is not gonna be stagnant, right? All of these cutting-edge groups of people that are getting really good at building subagents, and prompts, and tools, and connecting MCP servers, that's because developer tooling is not there yet. We still just have a chat box in an IDE for the most part. This is phase one of AI developer tooling. Everyone just gets a chat box added to the application. That's not the end, right? The terminal was not the end of operating systems. So, we're gonna see an explosion of much better, more usable AI tools to bring this AI to the masses. And that's when it gets really exciting, because a lot of these problems and these irritations go away, and then hopefully developers will start seeing the light, and saying, ‘actually, it's not so bad. It's not gonna produce all this terrible code, and I can have it automate this awful Python framework upgrade, and keep my libraries up to date, and fill in the unit test gaps that I can't be bothered to write, and refactor these big files automatically, and start messaging me.' Like, ‘do you want me to do this work?' I can't wait till we have proactive agents that don't just sit in the corner waiting for me to tell them what to do, but actually invent their own work based on the context and world events, and that's when it gets really interesting.

Ryan Donovan: For developers who are nervous about the future, a little reticent, what's your advice for skilling up, for getting ready for what the future holds?

Michael Parker: I don't know what the future holds, so it's hard to give advice. But I would say everything that you've learned over your career will not be wasted. Do not get sad about it, but don't bury your head in the sand. Problem solving, problem decomposition – these are things that will forever be useful in every walk of life. And some people forget that software development has been about learning new things forever. There has not been a time in history where software development has not had something new that year. You know, we went through the internet, and smartphones, and cloud native, and there's a lot of companies that are still learning cloud native, right? They still have big on-prem servers, and they're trying to migrate different types of databases, NoSQL, and scaling, and serverless. You know, are you gonna use Docker containers, and Kubernetes, and how much are you gonna run on your local machine and in CI? There's all this stuff that we've been learning, and whilst this is a bigger shift, it's not gonna be the last shift, right? I guess I would just advise, try to get out there a little bit. So, I do see some filter bubbles, where some people in the bull bubble [are] like, ‘AI's gonna take over the world. 2027, the singularity is coming. AI's just gonna write all of the code by the end of next year.' And then there's the other bubble where it's like, ‘AI's terrible, it will never work. Our jobs are safe, guys.' These bubbles need to start mingling a bit, right? You know, if you're a developer stuck in that latter bubble, start speaking to some people that have been experiencing some of the speed-up. And then if you're in the former bubble, try to talk to the big enterprises where AI really is not capable of solving their problems. So, I think we all need to talk a bit more, get out of our bubbles, and staying a little bit up to date with tooling is always a good idea, even though it's moving at a crazy pace. I talked to a developer a few weeks ago who said, ‘I'm so terrified by the rate of change, I'm just paralyzed, and I'm just waiting for the winner. I don't feel like there's any point learning something today because it'll be obsolete tomorrow. Why should I spend my time doing it?' I'm not sure about that. I think a few hours a week learning about prompting, about subagents, about rules files, about the latest features in just a few of the top AI tools is gonna be immensely valuable, just so you know what's around and what people are doing. Because if you stick your head in the sand for a few months, or even a year, in this environment, you're gonna start looking quite out of date. So, you wanna be in the best position when the new thing comes around.

Ryan Donovan: Yeah, because the current concepts, techniques are the foundation for the next level, right?

Michael Parker: So, what you have to do with a subagent today, coding it all manually and prompting it manually, is likely to come out of the box tomorrow. And then you can start deprecating those subagents, and you understand what they're for and what problem they're solving. Because you know, I've spoken to some developers who haven't really tried any new AI tools in nine months. And so it's, ‘ah, yeah, it just auto-generates the wrong thing.' And I'm like, ‘that was true in March, but things have moved on.' So, yeah. One thing that I'm interested in is team flow. So, I think a lot of AI companies at the moment are optimizing for individuals, and I think that's a good place to start. Like I said earlier, developers are experiencing broken flow. So, they prompt, and then they wait, and then they review, and then they wait, and it snaps them out because it built something that they weren't expecting. And so, we need to solve that problem at the individual level. But there's something more magical in software teams – really great teams – that I think we need to think about in the AI space. It's great to think about a magical individual in their bedroom writing millions of lines of code on their own like a genius, but I actually think if you really want high performance out of a department or a team or a business, it's really more about creating great teams. And that's one of the things I learned at Docker: if you want great products, build great teams. And so, how do we build great teams with AI? How do we get that feeling where you're in a room together and you are all on the same page? You have a vision, and you are whiteboarding. You've got ideas flying around – you've got an idea, I've got an idea – we're bouncing off each other, and it's energy, and it feels like the time just slips away. And you can achieve that as an individual just coding, but if you want a team to be more than the sum of its parts, then you need to get that team flow. And so, where AI fits into that, I think, is a really interesting question.

Ryan Donovan: I talked to somebody a couple weeks ago who wrote a book about social knowledge, and how that was the spark that brought science into view – people writing letters to each other. And I think building that social knowledge pipeline within an organization will be worlds more productive than just getting the best developer.

Michael Parker: Yeah, absolutely. AI systems that have memory, and they can learn, and every person they talk to, they get smarter, and they can remember what happened, and who's who, and what the different expertise is, and they can change what they're suggesting based on who they're talking to. Then, they start acting like more of an employee of the organization rather than this day contractor who just comes in with no knowledge and makes a mess and leaves, right? That's what we need. We need these AIs to be more part of the team and helping to bridge people together.

Ryan Donovan: And maybe they could be dispersing that information that they've been ingesting.

Michael Parker: Exactly. Yeah. Like answering questions for people, helping people get educated, helping onboard people. Yeah, AI is perfect for that.

Ryan Donovan: Alright. It's that time of the show where we shout out somebody who came onto Stack Overflow, dropped some knowledge, shared some curiosity, and earned themselves a badge. Today, we're shouting out the winner of a Stellar Answer badge: somebody came onto Stack Overflow and dropped an answer that was so good, it was saved by 100 people. So, congrats today to Adam Franco for answering, ‘How can I delete a remote tag?' If you're curious, Adam has a great answer for you in the show notes. I'm Ryan Donovan. I edit the blog and host the podcast here at Stack Overflow. If you have questions, concerns, comments, topics to cover, whatever, email me at podcast@stackoverflow.com, and if you wanna reach out to me directly, you can find me on LinkedIn.

Michael Parker: Hi, I've been Mike Parker, the VP of Engineering at TurinTech. You can find me on LinkedIn at Michael Parker Dev. We've just launched our developer preview program for Artemis with free credits, so please come try it at turintech.ai, and let me know what you think. We love all feedback.

Ryan Donovan: All right. Thank you for listening, everyone, and we'll talk to you next time.


