Interview: How AI Is Transforming Angular Development (Transcript)

Recently, I interviewed Daniel Sogl, who specializes in AI for software developers and has spoken about this very topic at various conferences. We discussed how AI will shape our work in the short, medium, and long term:

What tools are currently in your AI toolbox?

The front-runner is clearly GitHub Copilot – mainly because it integrates so well into enterprise environments. It works in almost any IDE, although the feature set varies. I also use Claude Code, which is CLI-based. And of course, ChatGPT – probably the most accessible tool. My grandfather, who's over 70, now codes in Python with it. There are tools like Cursor, Aider, Claude – lots of options. But many tools start to look alike. Features are quickly copied from one to another, so the differences are often minor.

You have hands-on experience using AI with Angular. What are your key takeaways?

Angular is tricky because it has changed a lot in recent years. React, in comparison, has remained more stable. LLMs like ChatGPT were trained on publicly available code, much of which is outdated. So, when you ask them to generate Angular components, you often get a weird mix of modern and old syntax. For example, you might get a standalone component that still uses *ngFor. That frustrates many Angular developers. But there are solutions – such as instruction files that bring LLMs up to date.
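To illustrate the kind of mix Daniel describes, here is a hypothetical template snippet: *ngFor is the legacy structural directive that LLMs trained on older code tend to emit, while @for is the built-in control flow syntax introduced in Angular 17:

```html
<!-- Legacy structural directive – common in older training data -->
<li *ngFor="let item of items">{{ item.name }}</li>

<!-- Modern built-in control flow (Angular 17+) -->
@for (item of items; track item.id) {
  <li>{{ item.name }}</li>
}
```

A component mixing both styles compiles in some setups, which is exactly why the problem often goes unnoticed until review.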

Do you have a specific example where AI saved you real development time?

Yes, for example with the NgRx Signal Store. I created an instruction file that explains how we structure our stores. With that in place, I can ask Copilot to generate a new store, and in about 90% of cases, it gets it right. AI also helps me when switching between client projects. I use it to get a quick understanding of the domain or the codebase – super helpful for debugging or onboarding.
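As a rough illustration – the file name and rules below are hypothetical, not Daniel's actual setup – such an instruction file might contain rules like:

```markdown
# ngrx-signal-store.instructions.md (hypothetical example)

- Use the NgRx Signal Store (`signalStore`) instead of class-based stores.
- Define state with `withState`, derived values with `withComputed`,
  and side effects with `withMethods`.
- Name store files `<feature>.store.ts` and export `<Feature>Store`.
- Never mutate state directly; always use `patchState`.
```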

How do these instruction files work exactly?

It depends on the tool. In Copilot, it's .instructions.md files, in Cursor it's .cursorrules, and in Claude Code it's CLAUDE.md. Basically, they are markdown files with coding rules that get appended to every prompt behind the scenes. This way, you can teach the LLM which Angular APIs to use and how to follow your project's best practices.
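Conceptually, the mechanism is simple. Here is a sketch in plain TypeScript – the function name and rule text are illustrative, not any tool's real API:

```typescript
// Sketch: how a tool might prepend instruction-file rules to every prompt.
// buildPrompt is a made-up name; real tools do this internally.
function buildPrompt(instructions: string, userPrompt: string): string {
  // The rules ride along invisibly in front of what the user typed.
  return `${instructions.trim()}\n\n---\n\n${userPrompt.trim()}`;
}

const rules = `
Use Angular's built-in @for control flow, never *ngFor.
All components are standalone.
`;

const finalPrompt = buildPrompt(rules, "Generate a product list component");
console.log(finalPrompt);
```

Because the rules travel with every request, you fix a recurring mistake once in the file instead of re-explaining it in each chat.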

How often do you use AI in your Angular workday?

I use it regularly – for writing stores, services, unit tests, and simple UI components. Especially for repetitive tasks. When it comes to more complex debugging or advanced UI design, I tend to do it manually but still use AI to brainstorm or suggest approaches.

Ever been stuck in a prompt loop where the AI keeps getting it wrong?

Definitely. That's the dark side of "vibe coding" – constantly prompting "please fix" without making progress. Sometimes it's better to close the tool and debug it yourself. That frustration tolerance is important, especially for junior developers.

Any tips for using AI in debugging?

Yes – MCP servers are a great way to enrich context. These are tools that let your LLM access external systems like GitHub, Confluence, or even logs. For example, an LLM can analyze a GitHub issue, pull logs, and check documentation to help understand why an acceptance criterion might not be met. It can save a lot of time.
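Protocol details aside, the core idea can be sketched in a few lines of TypeScript: an MCP server exposes named tools the LLM can invoke to pull in external context. The tool names and stub data below are invented for illustration – this is not the real MCP SDK:

```typescript
// Conceptual sketch of an MCP-style tool registry – not the real MCP SDK.
type ToolHandler = (args: Record<string, string>) => Promise<string>;

const tools = new Map<string, ToolHandler>();

// Each tool gives the LLM access to one external system.
tools.set("github_get_issue", async ({ id }) => {
  // In reality this would call the GitHub API; here it's a stub.
  return `Issue #${id}: login button unresponsive on mobile`;
});

tools.set("fetch_logs", async ({ service }) => {
  return `[${service}] ERROR TypeError: cannot read properties of undefined`;
});

// The LLM decides which tool to call; the server dispatches it.
async function callTool(
  name: string,
  args: Record<string, string>
): Promise<string> {
  const handler = tools.get(name);
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}

callTool("github_get_issue", { id: "42" }).then(console.log);
```

The LLM chains such calls itself – fetch the issue, pull the logs, compare against the docs – which is what makes the debugging workflow Daniel describes possible.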

GitHub claims 55% productivity gains. Do your experiences align with that?

Not really. We only spend about a third of our day actually coding. So even if AI saves 30% of that time, that’s only a 10% gain in the total workday. It’s still meaningful, especially because it reduces cognitive load. But let’s stay realistic.
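Daniel's back-of-the-envelope math checks out – a quick sketch, using the one-third and 30% figures he estimates above:

```typescript
// If coding is ~1/3 of the workday and AI speeds up coding by ~30%,
// the gain over the whole workday is the product of the two shares.
function workdayGain(codingShare: number, codingSpeedup: number): number {
  return codingShare * codingSpeedup;
}

const gain = workdayGain(1 / 3, 0.3);
console.log(`${(gain * 100).toFixed(0)}% of the total workday`); // ≈ 10%
```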

What about junior developers? How does AI affect them?

Two things: First, we still need developers who understand what AI-generated code actually does. Even if we don’t write all code ourselves anymore, we need to review and maintain it. Second, AI can help juniors learn – like a personal teacher that answers questions. But over-reliance can lead to laziness. Developers need to keep building their problem-solving skills.

And how about code quality?

Instruction files help here too. You can define your linting and formatting rules in them. That improves the baseline quality of generated code. But LLMs still duplicate logic or over-engineer things. So static code analysis remains essential.

When something’s wrong – do you refactor manually or prompt it to fix it?

Depends on the case. If it’s quicker to do manually, I just fix it. If it’s a recurring problem, I update the instruction file. You have to weigh what’s faster – prompting or typing.

Have you seen AI-generated bugs that looked fine on the surface?

Yes – all the time. The AI tells you “done,” but then ng serve fails. LLMs don’t validate their output unless you explicitly ask them to run a linter or build. So testing remains critical.

Looking ahead: How should we prepare for autonomous AI development?

Big topic. Autonomous AI agents are already being rolled out. GitHub Copilot, for example, can now fix issues by itself. For now, this works best on small tasks like updating translations. More complex changes still fall short. But the trend is clear: More and more tasks will be handled by AI. Developers will move into roles like technical supervisors – guiding architecture and reviewing AI-generated code.

What can I do if my dev team is skeptical about AI?

Be transparent. Set realistic expectations. Don’t promise 50% productivity gains in a month. Identify repetitive tasks where AI can help – like store generation or Swagger doc updates. And most importantly: developers stay in control. It’s a co-pilot, not an autopilot.

You also offer a workshop. What can participants expect?

It’s a two-day workshop focused on understanding which tools are available and how to use them effectively – especially GitHub Copilot. We go deep into instruction files, prompt engineering, MCP servers, and even how to build your own MCP server. The goal is to give developers the tools and knowledge to apply this in their own workflow.


What should every Angular developer know about AI – at a minimum?

How LLMs work and how they’re trained. That explains why they sometimes produce outdated or irrelevant results. The specific tool doesn’t matter as much – understanding the foundation is key.

What’s one concrete thing our readers should try out right after this interview?

Two things, actually:

  • Learn about Custom Instructions – they massively improve output quality.
  • Check out MCP Servers – they bring AI tooling to the next level by giving LLMs access to contextual data.

More on This

Check out our workshop on this topic:

Workshop Details: AI for Developer Productivity