Power of Ten Rules: More Relevant Than Ever for .NET
When someone handed me Gerard Holzmann’s “Power of Ten” rules and asked whether they still apply to C# 10/.NET, my answer was immediate: Absolutely, and we can enforce them better than Gerard Holzmann could have dreamed in 2006. The Power of Ten originated at NASA’s Jet Propulsion Laboratory for safety-critical C code in spacecraft systems. Not academic theory—principles where violations kill people. NASA’s 2011 technical assessment of Toyota’s electronic throttle control, commissioned after reports of unintended acceleration deaths, documented extensive coding-standard violations. When prosecutors examine catastrophic embedded failures, these ten rules are the baseline for “did you even try?”
What’s changed in two decades? In 2006, following these rules meant discipline, manual code reviews, and static analysis tools at $5,000 per seat. You verified bounded loops by hand, tracked pointer indirection with spreadsheets, enforced function length through policy documents nobody read. I’ve reviewed enough legacy embedded code to know most teams failed. The tooling wasn’t there, and manual enforcement doesn’t scale past three developers. Modern C# flips this completely. Roslyn analyzers catch violations at compile time—before code review, before testing, before anyone debugs at 2 AM wondering why the production system locked up. Nullable reference types enforce null-safety that C developers achieved through religious discipline and hope. The type system makes pointer arithmetic bugs physically impossible. Built into the compiler, zero additional cost, enforced automatically.
The catch—and there’s always a catch—is that not every rule translates directly. The managed runtime fundamentally rewrites what’s dangerous and what’s safe. Dynamic allocation is forbidden in embedded systems but powers every .NET framework feature. Recursion crashes spacecraft but handles expression trees beautifully when the CLR manages your stack. The real question isn’t “do these rules apply to C#?” It’s “how do modern language features enforce the underlying principles better than 2006 C ever could?” Let’s examine each rule systematically.
Rule 1: Avoid Complex Control Flow (No goto, setjmp/longjmp, Recursion)
Original Intent: Simplify static analysis and prevent unpredictable control flow in embedded systems.
C#/.NET Applicability: Partially valid, but context matters.
What Still Applies
goto remains controversial in C#, though the language supports it. I’ve seen developers defend it passionately in code reviews, and honestly, for breaking out of deeply nested loops, it’s occasionally the least-bad option. But “occasionally” is doing a lot of work in that sentence:
// Acceptable use case: breaking out of nested loops
private bool TryFindPosition(int[,] matrix, int target, out (int row, int col) position)
{
    for (int i = 0; i < matrix.GetLength(0); i++)
    {
        for (int j = 0; j < matrix.GetLength(1); j++)
        {
            if (matrix[i, j] == target)
            {
                position = (i, j);
                goto found;
            }
        }
    }
    position = default;
    return false;
found:
    return true;
}
But this is clearer:
// Better: extract to method with early return
private bool TryFindPosition(int[,] matrix, int target, out (int row, int col) position)
{
    for (int i = 0; i < matrix.GetLength(0); i++)
    {
        for (int j = 0; j < matrix.GetLength(1); j++)
        {
            if (matrix[i, j] == target)
            {
                position = (i, j);
                return true;
            }
        }
    }
    position = default;
    return false;
}
What’s Different
Recursion tells a completely different story. The JIT compiler doesn’t guarantee tail-call optimization like F# does. Still valuable for tree structures, parsing, algorithms where iterative alternatives become unreadable spaghetti. One caveat: a stack overflow in .NET still terminates the process (StackOverflowException hasn’t been catchable since .NET 2.0), so deep recursion remains a real failure mode. The difference from embedded C is that the crash is immediate and diagnosable. In embedded C, stack overflow silently corrupts adjacent memory. No recovery, no logging, just silence.
// Recursion is perfectly acceptable for bounded structures
public TreeNode? Find(TreeNode? node, int value)
{
    if (node == null) return null;
    if (node.Value == value) return node;
    return Find(node.Left, value) ?? Find(node.Right, value);
}
For deep recursion, modern C# offers alternatives:
// Convert to iteration with explicit stack
public TreeNode? Find(TreeNode? node, int value)
{
    if (node == null) return null;

    var stack = new Stack<TreeNode>();
    stack.Push(node);
    while (stack.Count > 0)
    {
        var current = stack.Pop();
        if (current.Value == value) return current;
        if (current.Right != null) stack.Push(current.Right);
        if (current.Left != null) stack.Push(current.Left);
    }
    return null;
}
Verdict: Use recursion where it makes sense, but be aware of stack depth. Avoid goto unless you have a compelling reason. The .NET runtime gives you safety nets that embedded C never had.
Rule 2: All Loops Must Have Fixed Upper Bounds
Original Intent: Enable static analysis to prove termination and prevent infinite loops.
C#/.NET Applicability: The principle is sound, but enforcement differs dramatically.
Modern Interpretation
Provable termination is still the goal, but .NET gives you runtime safety nets that embedded C never had:
// Bad: unbounded loop
while (true)
{
    var item = queue.Dequeue(); // Throws if the queue is empty
    Process(item);
}

// Better: explicit bound with timeout (inside an async method)
var cts = new CancellationTokenSource(TimeSpan.FromMinutes(5));
while (!cts.Token.IsCancellationRequested)
{
    if (queue.TryDequeue(out var item))
    {
        Process(item);
    }
    else
    {
        await Task.Delay(100, cts.Token);
    }
}
No static analyzer can prove loop termination in general, but modern .NET’s Roslyn-based analysis catches many loop and control-flow mistakes during compilation. Enable nullable reference types and the latest analysis level to catch these issues early:
<PropertyGroup>
  <Nullable>enable</Nullable>
  <AnalysisLevel>latest</AnalysisLevel>
</PropertyGroup>
Verdict: Always use bounded loops. Modern C# makes this easier with foreach, LINQ’s Take(), and cancellation tokens. The difference from embedded C: Your loops can be bounded at runtime with safe defaults rather than requiring compile-time constants.
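To make the Take() idiom concrete, here is a minimal sketch (the Counter generator is invented for illustration): LINQ turns an unbounded producer into a loop with a hard upper bound.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class BoundedLoopDemo
{
    // An endless producer: iterating it directly would never terminate
    static IEnumerable<int> Counter()
    {
        for (int i = 0; ; i++)
            yield return i;
    }

    static void Main()
    {
        // Take(5) imposes a provable upper bound of five iterations
        foreach (var n in Counter().Take(5))
            Console.WriteLine(n);
    }
}
```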
Rule 3: No Dynamic Memory Allocation After Initialization
Original Intent: Prevent memory exhaustion and fragmentation in systems without virtual memory.
C#/.NET Applicability: Fundamentally incompatible with .NET’s design philosophy.
Why This Rule Doesn’t Translate
The .NET garbage collector exists precisely to enable safe dynamic allocation. Forbidding dynamic allocation in .NET is like forbidding breathing—every framework feature depends on it:
// Standard .NET patterns rely on GC
var results = await httpClient.GetStringAsync(url);
var parsed = JsonSerializer.Deserialize<MyData>(results);
var filtered = parsed.Items.Where(x => x.IsActive).ToList();
The Modern Equivalent: Minimize Allocations
Rather than avoiding allocation entirely, focus on reducing unnecessary allocations and GC pressure:
// Inefficient: allocates multiple intermediate strings
string result = "";
for (int i = 0; i < 1000; i++)
{
    result += $"Item {i}; "; // Allocates new string each iteration
}

// Efficient: single allocation, reusable buffer
var sb = new StringBuilder(capacity: 1000 * 15);
for (int i = 0; i < 1000; i++)
{
    sb.Append("Item ");
    sb.Append(i);
    sb.Append("; ");
}
string result = sb.ToString();
Stack Allocation for Performance-Critical Code
When allocation overhead matters, use Span<T> with stackalloc:
// Stack allocation for small, temporary buffers
Span<int> buffer = stackalloc int[256];
for (int i = 0; i < buffer.Length; i++)
{
    buffer[i] = ComputeValue(i);
}
ProcessData(buffer);
Critical constraints: Stack allocations come with three important limitations. First, analyzer rule CA2014 prevents using stackalloc inside loops to avoid stack overflow. Second, keep allocations small (under 1KB recommended) since stack space is limited. Third, Span<T> can’t escape method scope: attempting to return a stackalloc-backed span trips the compiler’s ref-safety rules (error CS8352). For data that needs to escape, use Memory<T> or arrays instead.
Object Pooling for Hot Paths
For high-throughput scenarios:
// Use ArrayPool for reusable buffers
byte[] buffer = ArrayPool<byte>.Shared.Rent(4096);
try
{
    int bytesRead = await stream.ReadAsync(buffer.AsMemory(0, 4096));
    ProcessBytes(buffer.AsSpan(0, bytesRead));
}
finally
{
    ArrayPool<byte>.Shared.Return(buffer);
}
Verdict: Don’t avoid allocation, manage it intelligently. Use Span<T>/Memory<T> for performance-critical code, ArrayPool<T> for reusable buffers, and profile before optimizing. I’ve watched teams waste months optimizing allocations that consumed 2% of execution time while ignoring the database query running on every keystroke. The GC is your friend, not your enemy—but measure before you fight it.
Rule 4: No Function Longer Than One Printed Page (~60 Lines)
Original Intent: Keep functions digestible for human review and static analysis.
C#/.NET Applicability: Absolutely valid, possibly even more important.
Why This Rule Still Matters
In 15 years across finance, healthcare, and industrial control systems, I’ve never once regretted breaking a long method into smaller pieces. I’ve regretted plenty of 500-line monsters—particularly the one in a reporting visualization that took three developers two weeks to debug because nobody could hold the entire state machine in their head:
// Bad: 250-line method doing everything
public async Task<OrderResult> ProcessOrder(OrderRequest request)
{
    // Validate customer
    // Check inventory
    // Calculate pricing
    // Apply discounts
    // Process payment
    // Update inventory
    // Send notifications
    // Log everything
    // Handle 12 different error cases
    // ... 200 more lines
}
Modern C# actually makes this rule easier to enforce:
// Good: orchestration method with clear intent
public async Task<OrderResult> ProcessOrder(OrderRequest request)
{
    var customer = await ValidateCustomer(request.CustomerId);
    var availability = await CheckInventory(request.Items);
    if (!availability.AllAvailable)
        return OrderResult.OutOfStock(availability.UnavailableItems);

    var pricing = CalculatePricing(request.Items, customer.Tier);
    var payment = await ProcessPayment(customer, pricing.Total);
    if (!payment.Succeeded)
        return OrderResult.PaymentFailed(payment.Reason);

    await UpdateInventory(request.Items);
    await SendConfirmation(customer, request);
    return OrderResult.Success(payment.TransactionId);
}
Modern Enforcement Tools
Unlike 2006 C, modern tooling enforces this automatically. The CA1505 analyzer rule flags unmaintainable code based on cyclomatic complexity and maintainability index. Configure it in .editorconfig to treat violations as warnings, ensuring your team maintains digestible function sizes without manual review.
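As one concrete (illustrative) configuration, an `.editorconfig` entry can raise the rule’s severity; the severity level shown is a team choice, not a default:

```ini
# .editorconfig (illustrative): surface maintainability problems at build time
[*.cs]
dotnet_diagnostic.CA1505.severity = warning
```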
Local functions reduce the need for small private methods:
public IEnumerable<Order> GetRecentOrders(Customer customer)
{
    var cutoffDate = DateTime.UtcNow.AddDays(-30);
    return customer.Orders
        .Where(IsRecent)
        .OrderByDescending(o => o.Date);

    bool IsRecent(Order order) => order.Date >= cutoffDate;
}
Verdict: If anything, aim for shorter than 60 lines. With expression-bodied members, local functions, and LINQ, there’s no excuse for bloated methods. Enable analyzers to enforce it.
Rule 5: Assertion Density of Two Per Function
Original Intent: Catch anomalous conditions early with runtime checks.
C#/.NET Applicability: Valid principle, but modern C# offers better mechanisms.
The Modern Equivalent: Multiple Layers of Defense
NASA wanted two runtime assertions per function. C# gives you something better—multiple enforcement layers, starting at compile time:
1. Compile-Time Checks: Nullable Reference Types
#nullable enable

public class OrderProcessor
{
    // Compiler enforces non-null at compile time
    public OrderResult Process(Order order, Customer customer)
    {
        // No runtime null check needed - compiler guarantees non-null
        var total = order.Items.Sum(i => i.Price);

        // Nullable warns if customer.Email might be null
        SendReceipt(customer.Email, total);
        return OrderResult.Success;
    }

    private void SendReceipt(string email, decimal amount)
    {
        // email is guaranteed non-null by caller contract
    }
}
2. Parameter Validation: Guard Clauses
public void ProcessPayment(decimal amount, PaymentMethod method)
{
    ArgumentNullException.ThrowIfNull(method); // .NET 6+
    ArgumentOutOfRangeException.ThrowIfNegativeOrZero(amount); // .NET 8+
    // Business logic here
}

// Pre-.NET 6 equivalent
public void ProcessPayment(decimal amount, PaymentMethod method)
{
    if (method is null)
        throw new ArgumentNullException(nameof(method));
    if (amount <= 0)
        throw new ArgumentOutOfRangeException(nameof(amount), "Amount must be positive");
    // Business logic here
}
3. Debug Assertions for Internal Invariants
using System.Diagnostics;

public void UpdateBalance(Account account, decimal delta)
{
    Debug.Assert(account != null, "Account should never be null here");
    Debug.Assert(!account.IsClosed, "Should not update closed accounts");

    account.Balance += delta;

    Debug.Assert(account.Balance >= 0, "Balance should never go negative");
}
Key difference: Debug.Assert is removed in Release builds, making it suitable for checking invariants during development without runtime cost. (The older Code Contracts system offered formal preconditions, postconditions, and static verification, but it is no longer supported on modern .NET; prefer guard clauses, Debug.Assert, and analyzers instead.)
Pattern Matching for Exhaustiveness
Modern C# can enforce completeness at compile time:
public string GetStatusMessage(OrderStatus status)
{
    return status switch
    {
        OrderStatus.Pending => "Order is pending",
        OrderStatus.Processing => "Order is being processed",
        OrderStatus.Shipped => "Order has been shipped",
        OrderStatus.Delivered => "Order delivered",
        OrderStatus.Cancelled => "Order cancelled",
        // Omitting an enum value produces warning CS8509,
        // which becomes a build error with TreatWarningsAsErrors
    };
}
Verdict: Use nullable reference types for compile-time null safety, guard clauses for parameter validation, and Debug.Assert for invariants. This gives you better than “two assertions per function”: you get enforcement at compile time where possible, and runtime checks where needed.
Rule 6: Declare Data at Smallest Possible Scope
Original Intent: Minimize variable lifetime and potential misuse.
C#/.NET Applicability: Completely valid and reinforced by modern language features.
Modern C# Makes This Easier
// Bad: variables declared too early
public void ProcessData(int[] data)
{
    int sum = 0;
    int count = 0;
    int average = 0;
    int max = 0;

    // 50 lines later...
    for (int i = 0; i < data.Length; i++)
    {
        sum += data[i];
        count++;
    }
    average = sum / count;

    // Another 30 lines...
    max = data.Max();
}

// Good: declare at point of use
public void ProcessData(int[] data)
{
    // ... other logic ...

    int sum = 0;
    foreach (var value in data)
    {
        sum += value;
    }
    int average = sum / data.Length;

    // ... other logic ...

    int max = data.Max();
}
Pattern Matching Limits Scope Automatically
// Scope limited to when pattern matches
if (obj is Customer { IsActive: true } customer)
{
    // 'customer' only exists meaningfully in this block
    ProcessCustomer(customer);
}
// 'customer' can't be used here (not definitely assigned)

// Switch expressions with declaration patterns
var discount = order switch
{
    { Total: > 1000 } largeOrder => largeOrder.Total * 0.1m,
    { Customer.IsPremium: true } premiumOrder => premiumOrder.Total * 0.05m,
    _ => 0
};
Using Declarations for Resource Management
// Bad: resource scope managed by hand
public async Task<string> ReadConfig()
{
    var stream = File.OpenRead("config.json");
    try
    {
        // 100 lines of code
        return await ReadFromStream(stream);
    }
    finally
    {
        stream.Dispose();
    }
}

// Good: using statement limits scope
public async Task<string> ReadConfig()
{
    using (var stream = File.OpenRead("config.json"))
    {
        return await ReadFromStream(stream);
    } // stream disposed here, not at method end
}

// Even better: compact using declaration (C# 8+)
public async Task<string> ReadConfig()
{
    using var stream = File.OpenRead("config.json");
    return await ReadFromStream(stream);
    // stream disposed at end of method automatically
}
Verdict: More relevant than ever. Pattern matching limits scope automatically. Using declarations prevent resource leaks. Block-scoped variables can’t escape their intended lifetime. C# 10’s file-scoped namespaces extend the same spirit to namespace declarations: one less level of indentation, less ceremony around every declaration.
Rule 7: Check Return Values and Parameter Validity
Original Intent: Never ignore potential failures; validate all inputs.
C#/.NET Applicability: Absolutely critical, but mechanisms differ.
Return Value Checking: Exceptions vs. Result Types
.NET uses exceptions for error conditions, unlike C’s return codes:
// .NET idiomatic: exception-based
public void SaveCustomer(Customer customer)
{
    ArgumentNullException.ThrowIfNull(customer);
    try
    {
        database.SaveChanges(); // Throws on error
    }
    catch (DbUpdateException ex)
    {
        logger.LogError(ex, "Failed to save customer {CustomerId}", customer.Id);
        throw; // Re-throw or wrap in domain exception
    }
}
But C#’s Try* pattern provides explicit success/failure:
// When failure is expected and not exceptional
if (!int.TryParse(input, out int value))
{
    Console.WriteLine("Invalid number format");
    return;
}

if (!inventory.TryReserve(productId, quantity, out var reservation))
{
    return OrderResult.InsufficientStock;
}
Modern Pattern: Result Types
When failure is a normal business outcome rather than an exceptional condition, exceptions feel wrong. Result types make success and failure explicit:
public record Result<T>
{
    public bool Success { get; init; }
    public T? Value { get; init; }
    public string? Error { get; init; }

    public static Result<T> Ok(T value) => new() { Success = true, Value = value };
    public static Result<T> Fail(string error) => new() { Success = false, Error = error };
}

public async Task<Result<Order>> PlaceOrder(OrderRequest request)
{
    var validationResult = ValidateRequest(request);
    if (!validationResult.Success)
        return Result<Order>.Fail(validationResult.Error!);

    var paymentResult = await ProcessPayment(request);
    if (!paymentResult.Success)
        return Result<Order>.Fail(paymentResult.Error!);

    return Result<Order>.Ok(CreateOrder(request, paymentResult.Value!));
}
Parameter Validation: Multiple Strategies
public class OrderService
{
    private readonly IOrderRepository _repository;
    private readonly ILogger _logger;

    // 1. Constructor validation
    public OrderService(IOrderRepository repository, ILogger logger)
    {
        ArgumentNullException.ThrowIfNull(repository);
        ArgumentNullException.ThrowIfNull(logger);
        _repository = repository;
        _logger = logger;
    }

    // 2. Method parameter validation
    public Order CreateOrder(Customer customer, List<OrderItem> items)
    {
        ArgumentNullException.ThrowIfNull(customer);
        ArgumentNullException.ThrowIfNull(items);
        if (items.Count == 0)
            throw new ArgumentException("Order must contain at least one item", nameof(items));
        if (items.Any(i => i.Quantity <= 0))
            throw new ArgumentException("All items must have positive quantity", nameof(items));
        // Business logic
    }
}
Analyzer Support
Roslyn analyzers automatically detect unchecked return values. Enable CA1806 to catch ignored method results and IDE0058 to flag unused expression values. These rules make it much harder for a team to silently discard important return information.
Verdict: Always validate parameters. Always handle exceptions or check return values. Use ArgumentNullException.ThrowIfNull() for parameter checks, Try* methods for expected failures, and exceptions for exceptional conditions.
Rule 8: Limited Preprocessor Use
Original Intent: Avoid complex macros that obscure code meaning and hinder analysis.
C#/.NET Applicability: Largely irrelevant: C# preprocessor is far more constrained.
What C# Doesn’t Have (Thank Goodness)
C# preprocessor directives can’t:
- Define function-like macros
- Perform token pasting
- Create recursive macros
- Hide complex logic
// This is NOT possible in C#:
// #define MAX(a, b) ((a) > (b) ? (a) : (b)) // No function-like macros
// What C# DOES support:
#define DEBUG_LOGGING
#if DEBUG_LOGGING
Console.WriteLine("Debug information");
#endif
The Real Concern: Conditional Compilation Abuse
// Bad: excessive conditional compilation
public class ConfigService
{
    public string GetConnectionString()
    {
#if PRODUCTION
        return "Production connection string";
#elif STAGING
        return "Staging connection string";
#elif DEVELOPMENT
        return "Development connection string";
#else
        return "Local connection string";
#endif
    }
}

// Good: configuration-based approach
public class ConfigService
{
    private readonly IConfiguration _config;

    public ConfigService(IConfiguration config)
    {
        _config = config;
    }

    public string GetConnectionString()
    {
        return _config.GetConnectionString("DefaultConnection");
    }
}
Legitimate Uses
// Acceptable: Debug-only code
public void ProcessData(int[] data)
{
#if DEBUG
    Debug.Assert(data != null);
    Debug.Assert(data.Length > 0);
#endif
    // Production code
    int sum = 0;
    foreach (var value in data)
    {
        sum += value;
    }
}

// Acceptable: Platform-specific code
#if WINDOWS
[DllImport("user32.dll")]
private static extern bool ShowWindow(IntPtr hWnd, int nCmdShow);
#elif LINUX
[DllImport("libX11.so")]
private static extern IntPtr XOpenDisplay(string display_name); // returns a Display*
#endif
Verdict: C#’s preprocessor is already minimal by design. Use it for debug-only code and platform-specific implementations. For everything else, use dependency injection and configuration.
Rule 9: Restrict Pointer Use (Max One Level of Dereferencing)
Original Intent: Prevent complex pointer arithmetic and double-pointer confusion.
C#/.NET Applicability: Mostly irrelevant: managed code eliminates most pointer usage.
The .NET Alternative: References Are Not Pointers
In managed C#, you work with references, not pointers:
// Safe by default - no pointer arithmetic
public class Customer
{
    public string Name { get; set; }
}

// Reference semantics, but no pointer manipulation
Customer customer = new Customer { Name = "John" };
Customer same = customer; // same reference, not a copy
same.Name = "Jane";
Console.WriteLine(customer.Name); // Outputs: Jane
Unsafe Code: When You Actually Need Pointers
Rare, but sometimes unavoidable for interop or extreme performance:
// Performance-critical unsafe code - use sparingly
unsafe void ProcessBuffer(byte[] data)
{
    fixed (byte* ptr = data)
    {
        // One level of indirection - NASA would approve
        byte* current = ptr;
        for (int i = 0; i < data.Length; i++)
        {
            *current = (byte)(*current * 2);
            current++;
        }
    }
}
Modern Safe Alternative: Span<T>
// Safe, zero-allocation, pointer-like performance
void ProcessBuffer(Span<byte> data)
{
    for (int i = 0; i < data.Length; i++)
    {
        data[i] = (byte)(data[i] * 2);
    }
}

// Or even better:
void ProcessBuffer(Span<byte> data)
{
    foreach (ref byte value in data)
    {
        value = (byte)(value * 2);
    }
}
ref/in/out: Managed “Pointer-Like” Semantics
// Pass by reference without pointers
public void UpdateCustomer(ref Customer customer)
{
    customer = new Customer { Name = "Updated" };
}

// Read-only reference (no copying large structs)
public decimal CalculateTotal(in LargeStruct data)
{
    return data.Value1 + data.Value2; // No defensive copy
}

// Output parameter
public bool TryGetCustomer(int id, out Customer customer)
{
    // ...
}
Verdict: You rarely need unsafe code in modern C#. Use Span<T>/Memory<T> for performance-critical buffer manipulation. Use ref/in/out for reference semantics. Reserve unsafe for true interop scenarios.
Rule 10: Enable All Compiler Warnings and Use Static Analysis
Original Intent: Catch bugs early through automated checking.
C#/.NET Applicability: Not just valid: vastly more powerful than 2006 C tooling.
Modern Tooling Is Incomparably Better
<!-- Essential .csproj settings -->
<PropertyGroup>
  <!-- Treat all warnings as errors -->
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>

  <!-- Enable nullable reference types -->
  <Nullable>enable</Nullable>

  <!-- Enable latest code analysis -->
  <AnalysisLevel>latest</AnalysisLevel>
  <EnforceCodeStyleInBuild>true</EnforceCodeStyleInBuild>

  <!-- Enable specific analyzer categories -->
  <AnalysisMode>All</AnalysisMode>
</PropertyGroup>
Roslyn Analyzers: Static Analysis Built In
Unlike 2006 C compilers, Roslyn analyzers provide deep semantic analysis. Rules like CA2000 catch undisposed resources, CA1806 flags ignored return values, and CA1062 enforces parameter validation: all without running your code. This compile-time safety net would have been science fiction to NASA’s 2006 C developers.
EditorConfig for Team Standards
EditorConfig files let you enforce code style and quality rules across your team. Beyond basic formatting, you can control diagnostic severity for specific analyzer rules: turning CA1062 (validate arguments) into build errors while suppressing overly strict rules like CA1303 (localized strings). This ensures consistent standards without lengthy code review discussions about style preferences.
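A sketch of the kind of entries the paragraph describes (the severity choices are examples, not recommendations for every team):

```ini
# .editorconfig (illustrative)
[*.cs]
# CA1062 (validate arguments of public methods): fail the build on violations
dotnet_diagnostic.CA1062.severity = error
# CA1303 (do not pass literals as localized parameters): silenced for this team
dotnet_diagnostic.CA1303.severity = none
```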
Multiple Layers of Analysis
Modern .NET provides defense in depth: compiler warnings catch syntax issues, Roslyn analyzers enforce code quality (CA* rules) and style (IDE* rules). Extend this with third-party analyzers like SecurityCodeScan for security vulnerabilities, AsyncFixer for async/await pitfalls, or Microsoft.VisualStudio.Threading.Analyzers for threading issues. For enterprise scenarios, SonarQube and GitHub Advanced Security with CodeQL provide continuous security scanning.
Automated Code Review
Integrate analysis into CI/CD pipelines with GitHub Actions, Azure Pipelines, or similar platforms. Configure builds to treat warnings as errors (/p:TreatWarningsAsErrors=true) so quality issues block merges automatically. This transforms static analysis from optional developer tooling into enforced team standards.
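As a sketch, a minimal GitHub Actions job enforcing that gate might look like this (file path, versions, and project layout are assumptions):

```yaml
# .github/workflows/build.yml (illustrative)
name: build
on: [pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '6.0.x'
      # Any compiler or analyzer warning fails the build and blocks the merge
      - run: dotnet build -warnaserror
```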
Verdict: This is the one rule where C# developers have it absurdly better than 2006 C developers. I’ve reviewed codebases where enabling TreatWarningsAsErrors found 47 bugs in 30 seconds—bugs that had been sitting there for months. Enable everything: compiler warnings, Roslyn analyzers, nullable reference types. Use .editorconfig to enforce team standards. Automate it in CI/CD so quality gates can’t be ignored when the deadline looms. There’s no excuse not to.
Summary: Translation Guide
| Power of Ten Rule | C#/.NET Status | Modern Equivalent |
|---|---|---|
| 1. Simple control flow | Partially valid | Avoid goto; recursion OK with care |
| 2. Bounded loops | Valid | Use foreach, LINQ Take(), CancellationToken |
| 3. No dynamic allocation | Not applicable | Minimize allocations with Span<T>, ArrayPool<T> |
| 4. Max 60 lines per function | Absolutely valid | Enable CA1505, use local functions, extract methods |
| 5. Two assertions per function | Valid principle | Nullable types + guard clauses + Debug.Assert |
| 6. Minimal variable scope | Absolutely valid | Pattern matching, using declarations, block scope |
| 7. Check all returns | Absolutely valid | Validate parameters, handle exceptions, use Try* pattern |
| 8. Limited preprocessor | Mostly irrelevant | C# preprocessor already constrained; use DI for config |
| 9. Restrict pointers | Mostly irrelevant | Use Span<T>, ref/in/out; reserve unsafe for interop |
| 10. All warnings + static analysis | Emphatically valid | Enable all analyzers, nullable types, treat warnings as errors |
The Verdict: Timeless Principles, Modern Implementation
Four rules (4, 6, 7, and 10) transfer directly with superior tooling. Function length limits, minimal scope, return value checking, static analysis—all work better in 2025 than 2006 C. Three rules (3, 8, 9) become largely irrelevant because .NET’s managed runtime provides better abstractions than manual memory management or preprocessor macros. The remaining rules (1, 2, 5) require contextual interpretation. Their principles remain sound, but modern C#’s language features and runtime safety nets fundamentally change implementation.
Gerard Holzmann’s rules weren’t about C syntax. They encoded deeper principles—predictability, analyzability, defensive programming—that transcend any specific language. What’s different in 2025? Modern C# gives you powerful tools to enforce these principles without bare-metal constraints. Roslyn analyzers catch bugs at compile time that required runtime assertions in embedded C. Nullable reference types prevent null dereferences before code runs. Span<T> delivers pointer-like performance with array-like safety.
For safety-critical C# code—medical devices, financial systems, industrial control—absolutely adopt these principles. Enable every analyzer. Enforce short functions. Validate everything. Treat warnings as errors. But don’t cargo-cult embedded C constraints just because NASA used them. Embrace the GC when it makes sense, use modern abstractions that eliminate entire bug categories, and let the type system catch errors at compile time instead of in production. Your code will be safer, more maintainable, and more correct. Which is what Gerard Holzmann wanted all along.
