10 senior-level .NET performance questions with clear answers and real-time analogies
https://www.youtube.com/@DotNetFullstackDev
1. How does async / await improve scalability, and when can it hurt performance?
Answer
async / await doesn’t make CPU work faster.
It frees the thread while waiting for I/O (DB, HTTP, file).
For I/O-bound work:
await allows the thread to serve other requests instead of blocking.
For CPU-bound work:
async adds overhead (state machine, captures) without benefit. Use Task.Run carefully or stay synchronous.
Bad patterns:
Task.Result / .Wait() → risk of deadlocks, and it blocks the thread.
Over-awaiting tiny operations (e.g., await Task.CompletedTask;) in hot paths.
Making everything async all the way except at the top (UI / ASP.NET entry) and then blocking there.
// Good: I/O-bound
public async Task<User> GetUserAsync(int id)
{
    return await _dbContext.Users.FindAsync(id);
}

// Bad: CPU-bound (trivial) work wrapped as async for no reason
public async Task<int> AddAsync(int a, int b)
{
    return a + b; // async adds state-machine overhead here, and the compiler warns there is no await
}
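A minimal sketch of the blocking anti-pattern from the bad-patterns list above, reusing the GetUserAsync method; the wrapper name is illustrative:
// Bad: blocking on async work ties up a thread and risks deadlocks
// in contexts with a synchronization context (UI apps, classic ASP.NET).
public User GetUserBlocking(int id)
{
    return GetUserAsync(id).Result; // prefer: await GetUserAsync(id)
}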
Real-time analogy
Think of a waiter in a restaurant:
Blocking (sync): waiter stands near the kitchen until food is ready, serving only one table.
Async: waiter takes the order, hands it to the kitchen, then goes to other tables while food is being cooked.
If you make the waiter “async” for pouring water (instant action), you just add paperwork with no benefit.
2. Explain .NET Garbage Collector generations and how they impact performance.
Answer
.NET GC has 3 generations:
Gen 0: very short-lived objects (most allocations)
Gen 1: survivors of Gen 0
Gen 2: long-lived objects (caches, singletons, big graphs)
GC is optimized for the fact that most objects die young.
So Gen 0 collections are frequent but very cheap.
Gen 2 collections are expensive because they scan big heaps.
Performance tips:
Avoid unnecessary allocations in hot paths (boxing, new inside loops).
Reuse buffers (ArrayPool<T>, StringBuilder) for repetitive tasks.
Be careful with large object heap (LOH) allocations (objects of 85,000 bytes or more); they are costly and compacted less often.
// Allocation-heavy: each iteration allocates a new string
var log = "";
for (int i = 0; i < 100000; i++)
{
    log += i.ToString();
}

// Better: reuse StringBuilder
var sb = new StringBuilder();
for (int i = 0; i < 100000; i++)
{
    sb.Append(i);
}
var log = sb.ToString();
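The tips above also mention ArrayPool<T>; a minimal sketch of renting and returning a buffer instead of allocating a new array on every call (the 4096-byte size is illustrative):
// requires: using System.Buffers;
// Rent a reusable buffer from the shared pool instead of allocating per call
byte[] buffer = ArrayPool<byte>.Shared.Rent(4096);
try
{
    // ... fill and use the buffer (it may be larger than requested)
}
finally
{
    ArrayPool<byte>.Shared.Return(buffer); // give it back so other callers can reuse it
}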
Real-time analogy
Think of cleaning a house:
Gen 0 = picking up wrappers from the living room floor every hour. Easy.
Gen 2 = doing a full house deep clean. Rare but exhausting.
If you keep throwing large furniture into the house (big objects), deep cleaning becomes painful and slow.
3. When would you choose struct over class, and how can that help or hurt performance?
Answer
struct is a value type:
Stored inline (stack or inside containing object)
Copied by value
No GC overhead for separate heap object (for small structs)
Use struct when:
It’s small (e.g., up to 16–32 bytes)
Immutable
Represents a value (e.g., Point, DateTime, Guid)
Frequently created in hot paths
Avoid:
Large structs → copying is expensive
Mutable structs → confusing semantics
Boxing (when a struct is treated as object or an interface) → extra allocation (sketched after the Money example below)
public readonly struct Money // good: small, immutable value
{
    public decimal Amount { get; }
    public string Currency { get; }

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }
}
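A minimal sketch of the boxing pitfall from the "Avoid" list, using the Money struct above:
var price = new Money(9.99m, "USD");

object boxed = price;         // boxing: allocates a heap object and copies the struct into it
Money unboxed = (Money)boxed; // unboxing: copies the value back out

// In hot paths, prefer generic APIs (e.g., List<Money>) over object-typed ones to avoid boxing.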
Real-time analogy
Think of car keys:
As a struct: keys are small; you carry a physical copy in your pocket (lightweight, no indirection).
As a class: you put keys in a separate locker and always fetch them using a token (an extra trip each time).
For small things, keeping them in your pocket (value type) is faster than putting each in a separate locker (heap object).
4. How do Span<T> and Memory<T> improve performance in .NET?
Answer
Span<T>:
A ref struct representing a contiguous region of memory (stack, heap, native).
Allows slicing arrays, strings, and buffers without extra allocation.
Lives only on the stack, so it cannot be stored in class fields or used across await boundaries in async methods.
Memory<T>:
Similar to Span<T> but heap-allocatable.
Works with async code and interfaces.
Has a .Span property to get a Span<T> for fast operations.
Benefits:
Avoids copying when slicing or parsing data.
Great for parsing protocols, building serializers, working with large buffers.
ReadOnlySpan<char> nameSpan = "John Doe".AsSpan();
var firstName = nameSpan[..4]; // "John" – no new string created

public void ProcessSpan(ReadOnlySpan<byte> data)
{
    // operate directly over data without allocations
}
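A minimal sketch of how Memory<T> crosses an await while Span<T> handles the synchronous hot part; the stream parameter and 1024-byte buffer are illustrative:
public async Task<int> CountLineBreaksAsync(Stream stream)
{
    Memory<byte> buffer = new byte[1024]; // Memory<T> may live across await; Span<T> may not

    int read = await stream.ReadAsync(buffer);

    // Drop down to Span<T> for allocation-free, synchronous processing
    ReadOnlySpan<byte> span = buffer.Span[..read];
    int count = 0;
    foreach (byte b in span)
    {
        if (b == (byte)'\n') count++;
    }
    return count;
}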
Real-time analogy
Imagine reading only one chapter of a book:
Without Span: you photocopy the chapter into a new booklet, then read it → extra paper and time.
With Span: you simply place a bookmark over the chapter and read directly from the original book.
Span is the bookmark—no new copies, just a different view of the same memory.
5. LINQ vs for loop: which is faster and why?
Answer
LINQ is highly optimized, but not free:
It creates iterators, delegates, and sometimes allocations.
Chaining multiple LINQ operators builds multiple iterator objects.
In non-critical paths, the readability tradeoff is worth it.
For hot loops (millions of iterations, low-latency code):
A plain for/foreach loop is typically faster and allocation-free.
Use LINQ where clarity > micro-performance.
// LINQ
var sum = numbers.Where(n => n % 2 == 0).Sum();

// Manual loop (faster in tight loops)
int sum = 0;
foreach (var n in numbers)
{
    if (n % 2 == 0)
        sum += n;
}
Real-time analogy
LINQ is like using a high-level Excel formula.
Super readable, fewer lines of “manual work”.
But Excel does some hidden steps internally.
A for loop is like writing calculations directly on paper: more verbose but no framework overhead.
6. How do you improve string handling performance in .NET?
Answer
Key points:
string is immutable – every modification (concat, replace) creates a new string.
For repeated concatenation → use StringBuilder.
Avoid string.ToLower() / ToUpper() in hot paths when checking equality; use StringComparison instead.
For parsing / substring-heavy operations → consider ReadOnlySpan<char> and string.Create.
// Better comparison
if (string.Equals(input, "admin", StringComparison.OrdinalIgnoreCase))
{
    // ...
}

// Good for repetitive concatenation
var sb = new StringBuilder();
foreach (var item in items)
{
    sb.Append(item.Name).Append(',');
}
string result = sb.ToString();
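A minimal sketch of string.Create, which writes the final string in place instead of going through intermediate strings; the order-code format is purely illustrative:
// Builds a code like "ORD-000042" with a single final string allocation
static string FormatOrderCode(int id)
{
    return string.Create(10, id, (span, value) =>
    {
        "ORD-".AsSpan().CopyTo(span);            // write the prefix directly into the buffer
        value.TryFormat(span[4..], out _, "D6"); // write the zero-padded number after it
    });
}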
Real-time analogy
Think of editing a printed document:
Without StringBuilder: every time you change a sentence, you reprint the entire page.
With StringBuilder: you work in a text editor, make many modifications in memory, and print once.
Too many “reprints” (new strings) slow your system down.
7. How do you diagnose and fix performance issues in a .NET application?
Answer
Typical senior-level workflow:
Measure first – never optimize blindly.
Use dotnet-trace, dotnet-counters, Visual Studio Profiler, PerfView, or Application Insights.
Identify hotspots – CPU, memory allocations, I/O waits, locks.
Classify issue:
CPU-bound → algorithms, loops, serialization, JSON parsing.
I/O-bound → DB calls, HTTP, file I/O.
GC/alloc heavy → excessive new, boxing, large objects, short-lived strings.
Fix by priority – top 10% hotspots usually give 90% of gain.
Re-measure – confirm improvement & ensure no regressions.
For micro-benchmarks → use BenchmarkDotNet.
[MemoryDiagnoser]
public class ParsingBenchmarks
{
    private readonly string _data = "1,2,3,4,5";

    [Benchmark]
    public int ParseWithSplit()
    {
        return _data.Split(',').Select(int.Parse).Sum();
    }

    [Benchmark]
    public int ParseWithSpan()
    {
        ReadOnlySpan<char> span = _data.AsSpan();
        int sum = 0;
        int start = 0;
        for (int i = 0; i <= span.Length; i++)
        {
            if (i == span.Length || span[i] == ',')
            {
                sum += int.Parse(span[start..i]);
                start = i + 1;
            }
        }
        return sum;
    }
}
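A minimal sketch of how the benchmark class above is typically run; assumes the BenchmarkDotNet NuGet package is referenced and the project is executed in Release mode:
using BenchmarkDotNet.Running;

public class Program
{
    // Run with: dotnet run -c Release
    public static void Main() => BenchmarkRunner.Run<ParsingBenchmarks>();
}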
Real-time analogy
You don’t upgrade all machines in a factory randomly.
First, you walk the floor with a stopwatch, find the slowest station (bottleneck), fix that, and re-check throughput.
Profilers are your factory stopwatch.
8. What are common Entity Framework Core performance pitfalls and how do you handle them?
Answer
Common pitfalls:
N+1 queries – loading a collection of entities and then lazy-loading related data per item.
Loading more columns/rows than needed (ToList() without filters).
Enabling tracking when only reading data.
Using heavy LINQ projections that EF can’t translate efficiently.
Mitigations:
Use Include wisely or, better, explicit projections (Select) to DTOs.
Use .AsNoTracking() for read-only queries.
Cache compiled queries for frequently repeated queries.
// Bad: loads entire entity graph, tracking on
var users = await _context.Users.ToListAsync();

// Better: read-only projection, no tracking
var users = await _context.Users
    .AsNoTracking()
    .Select(u => new UserDto
    {
        Id = u.Id,
        Name = u.Name,
        Email = u.Email
    })
    .ToListAsync();
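A minimal sketch of the N+1 pitfall from the list above; an Orders navigation property with lazy loading enabled is assumed for illustration:
// Bad: 1 query for users + 1 additional query per user when Orders is lazy-loaded
var users = await _context.Users.ToListAsync();
foreach (var user in users)
{
    Console.WriteLine(user.Orders.Count); // each access can trigger another round-trip
}

// Better: get everything needed in a single query via projection
var orderCounts = await _context.Users
    .Select(u => new { u.Id, OrderCount = u.Orders.Count })
    .ToListAsync();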
Real-time analogy
Imagine a waiter bringing the entire restaurant menu, kitchen inventory, and chef’s notes to your table when you ask for the “Today’s special”.
EF can do the same if you don’t restrict what you need.
Ask specifically for the dish you want (projection), not the entire restaurant.
9. Explain caching strategies in .NET for performance improvement.
Answer
Caching reduces repeated expensive operations:
In-memory cache (IMemoryCache) – per app instance; great for small data, configuration, lookups.
Distributed cache (IDistributedCache) – Redis/SQL; shared across instances, good for web farms.
Output caching – cache rendered results of controllers/pages for anonymous or semi-static responses.
Important aspects:
Choose expiration policies (absolute vs sliding).
Consider cache stampede (many requests miss at once) → use locks or “cache aside” logic.
Cache only expensive, read-heavy, rarely changing data.
public async Task<Product> GetProductAsync(int id)
{
    return await _cache.GetOrCreateAsync($"product:{id}", async entry =>
    {
        entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
        return await _dbContext.Products.FindAsync(id);
    });
}
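One way to soften the cache-stampede problem mentioned above is to serialize cache misses behind a lock; a minimal sketch using SemaphoreSlim (a single shared lock is shown for brevity, real code often locks per key):
private static readonly SemaphoreSlim _cacheLock = new(1, 1);

public async Task<Product> GetProductSafeAsync(int id)
{
    if (_cache.TryGetValue($"product:{id}", out Product cached))
        return cached;

    await _cacheLock.WaitAsync();
    try
    {
        // Re-check after acquiring the lock: another request may have filled the cache
        if (_cache.TryGetValue($"product:{id}", out cached))
            return cached;

        var product = await _dbContext.Products.FindAsync(id);
        _cache.Set($"product:{id}", product, TimeSpan.FromMinutes(5));
        return product;
    }
    finally
    {
        _cacheLock.Release();
    }
}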
Real-time analogy
Think of keeping frequently used spices near the stove:
Without cache: every time you cook, you walk to the storage room to fetch salt, chili, etc.
With cache: you keep them handy on the countertop.
You still go to the storage room occasionally (DB), but not for every dish.
10. How do you design APIs for high throughput and low latency in ASP.NET Core?
Answer
Key principles:
Use async all the way for I/O-bound work (DB, HTTP, file).
Avoid blocking calls (.Result, .Wait()), as they consume threads from the thread pool.
Minimize per-request allocations – avoid big objects, large temp lists, heavy logging string concatenations.
Use dependency injection with appropriate lifetimes (e.g., DbContext as scoped, heavy services as singleton).
Enable response compression and HTTP/2+ where applicable.
Use minimal APIs or tuned controllers for high-performance scenarios.
Keep middleware pipeline slim; avoid costly work in middleware.
app.MapGet("/users/{id:int}", async (int id, AppDbContext db) =>
{
    var user = await db.Users
        .AsNoTracking()
        .Where(u => u.Id == id)
        .Select(u => new UserDto { Id = u.Id, Name = u.Name })
        .FirstOrDefaultAsync();

    return user is null ? Results.NotFound() : Results.Ok(user);
});
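A minimal sketch of enabling response compression (one of the principles above) in Program.cs; assumes the built-in ASP.NET Core middleware with its default Brotli/Gzip providers:
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddResponseCompression(options =>
{
    options.EnableForHttps = true; // weigh against compression-based attacks on sensitive payloads
});

var app = builder.Build();
app.UseResponseCompression(); // register early so responses from later middleware get compressed
app.Run();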
Real-time analogy
Imagine running a toll plaza:
Each lane (thread) should move cars quickly.
If a lane officer starts doing paperwork (blocking I/O) while a car is in front, the entire queue stalls.
You want quick swipes (async I/O), short conversations (small payload), and as few checkpoints (middlewares) as possible.
Design your API like a fast toll plaza, not a bureaucratic office.



