Martin Stühmer

I’m Martin, a software architect and developer from the Stuttgart region who’s been building .NET systems since the Framework 2.0 days. Nearly 20 years later, I’m still here—not because I’m stubborn (well, not only), but because the .NET ecosystem keeps delivering value when you cut through the hype and focus on fundamentals.

My work centers on a simple premise: quality isn’t optional, and buzzwords don’t ship features. As Director Consulting Services at CGI, I work with teams building cloud-native solutions on Azure and modern .NET. I help organizations navigate technical decisions using Risk and Cost Driven Architecture (RCDA)—a methodology that brings systematic risk and cost analysis to architecture choices instead of relying on “best practices” pulled from conference slides.

What I Actually Do

Enterprise Architecture & Development: I design and build systems that need to scale, perform, and stay maintainable after the consultants leave. My focus spans the full stack—from Azure infrastructure and DevOps pipelines to .NET application design, performance optimization, and the unglamorous work of making legacy systems coexist with modern approaches. I’ve worked with teams ranging from startups to global enterprises, always with the same goal: shipping production-ready software that solves real business problems without accumulating crippling technical debt.

The work involves hard choices—choosing boring, reliable technologies over exciting new ones when stability matters more than innovation. Designing for the team you have, not the team you wish you had. Building systems that can evolve incrementally instead of requiring big-bang rewrites. It’s less glamorous than conference keynotes make it sound, but it’s what keeps production systems running.

Training & Knowledge Transfer: As a Microsoft Certified Trainer and IHK-certified instructor, I train developers and architects who want depth, not surface-level tutorials. Whether it’s cloud architecture, modern .NET features, or pragmatic testing strategies, my goal is equipping teams to make informed decisions rather than following trends blindly.

My training approach comes from spending years cleaning up messes created by applying “best practices” without understanding context. I don’t teach dogma—I teach judgment. Students learn to evaluate trade-offs, understand when to break rules, and recognize that most software engineering advice comes with implicit assumptions about scale, team structure, and business constraints that may not match their reality.

Open Source Contributions: I maintain several NuGet packages focused on solving real problems I’ve encountered in production systems. Code quality, testability, and developer experience drive these projects—not feature counts or GitHub stars. Each package exists because I’ve repeatedly solved the same problem across different projects and decided the ecosystem needed a better solution than copying code between repositories.

These contributions aren’t just code dumps—they’re maintained, documented, and tested because I actually use them in production. When you depend on one of my packages, you’re getting something that’s survived contact with real-world requirements, not a weekend experiment that seemed like a good idea at the time.

Writing & Teaching: Through this blog, I share perspectives on software engineering that challenge conventional wisdom when warranted. Topics range from .NET evolution and Azure patterns to technical debt management, quality practices, and why “Clean Code” has become little more than lip service in many organizations. If you’re looking for balanced, experience-driven insights rather than cheerleading for the latest framework, you’re in the right place.

I write about topics that matter for teams shipping production software—the messy middle ground between academic computer science and “move fast and break things” startup culture. Expect posts that question whether you actually need that microservices architecture, analyze .NET performance characteristics with benchmarks not hand-waving, and explore how to incrementally improve legacy codebases without grinding development to a halt.

My Technical Philosophy

After nearly two decades, here’s what I’ve learned:

  • Fundamentals over frameworks: Languages, patterns, and principles outlast specific tools
  • Quality over velocity: Analyzers, tests, and code reviews prevent more bugs than heroic debugging sessions
  • Context over dogma: “Best practices” depend on team size, domain complexity, and business constraints
  • Pragmatism over purity: Perfect architecture that ships in six months loses to good-enough architecture that ships in six weeks
  • Evidence over opinion: Measure performance, validate assumptions, benchmark alternatives

I’m skeptical of buzzword-driven development, hostile to cargo-cult practices, and allergic to “because everyone else does it” as technical justification. But I’m also pragmatic—sometimes the boring, proven approach is exactly what your production system needs.

The Journey: From Framework to Modern .NET

I started with .NET Framework 2.0 in an era when SOAP web services were cutting-edge and ORMs were controversial. I’ve lived through the rise and fall of Silverlight, WPF’s promise and reality, WCF’s complexity, and the awkward adolescence of ASP.NET Web Forms. I witnessed the .NET Core revolution—Microsoft’s bet-the-company move to open-source, cross-platform development that paid off spectacularly.

This long view shapes how I evaluate new technologies. When someone pitches the latest JavaScript framework as the future, I remember when that was Angular 1.0. When architectural patterns become trendy, I recall which ones survived the test of production systems and which ones became cautionary tales in architecture horror story sessions.

Modern .NET (.NET 6, 8, 9, and now 10) represents the maturation of Microsoft’s platform strategy—combining the stability and enterprise-readiness of .NET Framework with the performance and innovation of .NET Core. But experience teaches caution: every new feature, pattern, or framework comes with costs that conference talks rarely mention. My job is helping teams understand those costs before they commit.

What Drives My Work

Making Technical Debt Visible: Too many organizations treat technical debt as an invisible problem until it becomes an existential crisis. Through RCDA, quality metrics, and transparent architectural decision-making, I help teams quantify, prioritize, and systematically address debt. Not by stopping all feature work for mythical “cleanup sprints,” but by integrating quality improvements into the regular development flow.

Elevating Quality Standards: I’ve seen too many teams accept buggy, untested, unmaintainable code because “we need to ship fast.” This is a false choice. Static analyzers catch bugs in milliseconds. Well-structured tests prevent regressions automatically. Code reviews transfer knowledge and catch design flaws. These practices don’t slow teams down—they prevent the constant firefighting that actually kills velocity.
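As a small sketch of what that looks like in practice (the `Pricing` class and xUnit-style test below are hypothetical, not taken from any specific project), a one-line invariant test runs on every build in milliseconds and permanently pins down a business rule that is otherwise easy to break silently:

```csharp
using System;
using Xunit;

// Hypothetical example: a discount rule that is easy to regress without noticing.
public static class Pricing
{
    // Invariant: a discount never pushes the price below zero.
    public static decimal ApplyDiscount(decimal price, decimal discount)
        => Math.Max(0m, price - discount);
}

public class PricingTests
{
    [Fact] // executes automatically in CI; no heroic debugging session required
    public void Discount_never_yields_a_negative_price()
    {
        Assert.Equal(0m, Pricing.ApplyDiscount(10m, 15m));
        Assert.Equal(5m, Pricing.ApplyDiscount(10m, 5m));
    }
}
```

The point isn’t the trivial arithmetic: it’s that the rule is now enforced by the build rather than by whoever happens to remember it during a code review.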

Bridging Strategy and Implementation: Many architects design ivory-tower systems that fall apart when real requirements and real developers get involved. Many developers focus solely on code without understanding business context. I work in the gap between these extremes—translating business needs into technical strategy, and technical constraints into terms business stakeholders can evaluate.

Published posts

Your [Authorize] Attribute Is Compliance Theater

Your [Authorize] attributes give you a false sense of security. ISO 27001 auditors see right through it.

I’ve reviewed dozens of ASP.NET Core apps that authenticate flawlessly — then scatter role strings across business logic, skip audit logs, and wonder why they fail compliance. Here’s the pattern that kills audits, and how to actually fix it.

Why ISO Standards Actually Matter for .NET Developers

Cloud-native .NET development has transformed ISO/IEC 27001, 27017, and 27701 from abstract compliance requirements into concrete daily coding decisions. This guide shows .NET developers how security standards directly map to Azure Key Vault integration, Azure AD authentication, and proper logging—with real code examples demonstrating compliant vs. non-compliant implementations.

Real Professional Software Engineering in the AI Era

Throughout this series, we’ve established that AI-generated code without understanding creates productivity illusions that collapse in production (Part 1), and that the feedback loop between code and reality—compilation, testing, profiling, production—sharpens thinking in ways AI can’t replicate (Part 2). Now we confront the practical question: What defines professional software engineering when code generation becomes trivial? This final part examines the irreplaceable skillset: understanding execution characteristics (recognizing allocation patterns that cause GC pressure before deployment), asking questions AI can’t formulate (What’s the failure mode when this service is unavailable?), recognizing when plausible AI solutions diverge from correct ones, debugging production failures AI has no execution model to reason about, and evaluating maintainability for code that becomes tomorrow’s burden. We explore why prompt engineering optimizes for speed while architecture optimizes for survival, why “AI productivity” often means faster technical debt accumulation, and why the economic reality favors organizations that measure system reliability over lines of code generated. The feedback loop can’t be automated because closing it requires learning from production failures and applying that knowledge to prevent future ones—the irreplaceable discipline that defines real professionals in 2026 and beyond.

Stoßlüften: The Architecture of Intentional Resets

A Swabian habit teaches a DevOps lesson: open windows fully and often, or invisible decay accumulates. Stoßlüften isn’t about comfort—it’s about forcing systems to prove they’re healthy. Regular restarts, infrastructure-as-code, and reproducibility checks catch the problems that green metrics miss.

The Feedback Loop That AI Can't Replace

In the first part of this series, we established that AI-generated code without understanding creates an illusion of productivity that collapses under production load. The differentiator isn’t typing speed—it’s the feedback loop where code meets reality and exposes incomplete thinking. But what exactly is this feedback loop, and why can’t AI replicate it? Modern compilers validate logical consistency, catching gaps pure thought leaves unresolved. Profilers expose the 75x performance difference between “seems reasonable” and “actually performs.” Production environments reveal every assumption abstract thinking deferred—scale, concurrency, failure modes. This article explores the mechanisms that transform vague reasoning into concrete understanding: compilation validates logic instantly, testing catches behavioral mismatches, profiling measures what abstract analysis guesses, and production exposes the cost of every deferred decision. Real professionals don’t just write code—they master the iterative discipline of watching it fail, understanding why, and refining their thinking. AI participates in parts of this loop, but it can’t close it. That’s where professionals remain irreplaceable.