Hands-on architect in the .NET space with 26 years of experience on an everlasting quest for knowledge to build the right software the right way at the right time
The Hague, Netherlands
Dennis works for Aviva Solutions, is a Microsoft MVP and a veteran hands-on architect in the .NET space with a special interest in writing clean code, Domain-Driven Design, Event Sourcing and everything agile. He specializes in designing enterprise solutions based on .NET technologies, as well as coaching on all aspects of designing, building and maintaining enterprise systems. He is the author of https://www.fluentassertions.com, a very popular .NET assertion framework, and https://www.liquidprojections.net, a set of libraries for building Event Sourcing architectures, and he has been maintaining coding guidelines for C# at https://www.csharpcodingguidelines.com since 2001. He also keeps a blog on his everlasting quest for better solutions at https://www.continuousimprover.com. You can reach him on Twitter through https://twitter.com/ddoomen and on Mastodon through https://mastodon.social/@ddoomen.
Area of Expertise
About 15 years ago, I got inspired by the practice of Test Driven Development. Now, with many years of great and not-so-great experiences practicing it, I thought it was time to capture my own “laws”. The term "law" is obviously an exaggeration; "principles" or "heuristics" covers my intent much better. Either way, I want to talk about the scope of testing, testing against observable behavior, naming conventions, state-based versus interaction-based testing, the impact of DRY, patterns to build useful test objects and some of the libraries and tools I use. In short, everything I try to follow every day as I write, review or maintain code.
Having designed, built and run systems that are deployed on oil platforms all over the world and that use Event Sourcing as their underlying architecture means that I've experienced all the greatness and the pains this architecture style has to offer. In this talk, I'm going to share with you my experiences from over 10 years of practicing Event Sourcing in multiple systems. I'll talk about the reasons we adopted it in the first place, the challenges we had introducing it to the developers, the problems we ran into in production, and the lack of confidence we got from management. I'll conclude with my current thoughts on Event Sourcing and how I would implement it if I had to do it again.
For as long as I’ve been developing with Visual Studio, I’ve been looking for add-ons to make me more productive. In 2004, I discovered JetBrains’ ReSharper. With its improved IntelliSense, the coloring of identifiers and the many built-in refactorings, it felt like a new world had opened up to me. But I also noticed that with every new Visual Studio and ReSharper release, the memory and CPU footprint increased a lot. In 2016, JetBrains announced Rider, a full-blown IDE for .NET and C# developers based on their wildly successful IntelliJ IDE and all the power of ReSharper. By the end of 2017, I fully switched to Rider and never looked back.
In the meantime, I regularly ran into people who had switched back to a bare-bones Visual Studio installation and told me that they didn’t miss anything. So I started to wonder. Did Visual Studio really get so much better since the last time I tried it? Did Microsoft finally get the message and pick up the pace in building a productive IDE? To see how much Microsoft has evolved, I decided to try it myself. This is what I learned.
After more than 10 years of development, our pet project, Fluent Assertions, has almost reached 250 million downloads. Providing a high-quality library like that doesn't come for free. We've been trying to write code that is clean enough for our contributors, write tests that are self-explanatory, ensure breaking changes are strictly controlled, and make the library easy to use.
In this talk, I'd like to share the tools and techniques we have been using, how they've enriched our day jobs, and how they may do that for you too.
I'll talk about the release strategy, documentation, versioning, naming conventions, code structure, the build pipeline, automated testing, code coverage, API change detection, multi-targeting and more.
Covers: GitVersion, GitFlow, SemVer, Chill, test naming conventions, Nuke, GitHub Actions, Coverlet, Approval Tests, GitHub release notes, Jekyll/Minimal Mistakes, multi-targeting, C# 10, Rider, JetBrains Annotations, EditorConfig/StyleCop, TDD, design guidelines, decision logs, mutation testing
I'm sure every developer wants to be able to change code with confidence and without fear. Readable and self-explanatory code is one aspect of that. Too much coupling is another major source of problems that prevents you from changing one part of the system without causing side-effects. In this talk, I'd like to show you common sources of unnecessary coupling and offer options to help prevent and/or break those. I'll talk about how principles like Don't Repeat Yourself and Dependency Injection can be a double-edged sword, how to detect unnecessary dependencies and how to use the Dependency Inversion Principle to unravel some of those. And yes, I will also talk about controlling dependencies at the package level.
@law of demeter
@Dealing with ugly dependencies by applying DIP and an adapter
@SQ rule to detect too many dependencies
@Dependency management at the package level
@Dependency injection is great, but try to avoid a global container
@ASP.NET core at the module level
Plenty of great content has been written about the problems Event Sourcing can help you solve. But after almost 10 years of using Event Sourcing in a distributed, occasionally connected environment, I've experienced a lot of real-world challenges. In this talk, I'll cover the common challenges I've run into and the options I use to address them. I'll cover aggregate boundary challenges, problems finding the invariants, cross-domain communication, value types and events, versioning of events, eventual consistency of projections, projection bugs, time to rebuild projections, supporting blue/green deployments and more.
A couple of weeks ago I ended up in a technical debate on how to take an existing Event Sourced application further to fully reap the benefits it is designed to give you. I’ve written many posts about the pitfalls, the best practices and how to implement this in .NET specifically. But after 10 years of Event Sourcing, I still think it is useful to provide you with a list of the most important guidelines and heuristics that I think are needed to be successful with Event Sourcing. I'll talk about aggregate boundaries, appropriately-sized events, cross-domain communication, optimizing projection speed, aggregating data and more.
I regularly end up in a discussion on whether Event Sourcing is the right architecture style or not. As the universal answer to this question tends to be “it depends”, I started thinking about the typical problems where I would use Event Sourcing. With about 10 years of experience building and running Event Sourced systems, I came up with 10 examples of where Event Sourcing shines, some more functional and some more technical.
Whether you're just writing your first unit test or have been practicing Test Driven Development for many years, I'm sure your biggest challenge is to find the right scope for testing. Unfortunately, a lot of books and guidance seem to imply that every class or type should be tested separately. Well, I can tell you from 15 years of unit testing experience that that's the best way to shoot yourself in the foot.
In this session, I'm going to show you some concrete examples of why I chose a certain boundary for the "unit" in unit testing. I will also share the design heuristics I use myself to find those boundaries and why those are heuristics rather than hard guidelines. I'll also illustrate how principles like DRY and SOLID can be both a blessing and a curse that may lead to the wrong design choices.
It gives the audience food for thought on how the internal boundaries of your architecture can influence the scope of automated testing, how different layers of testing are complementary, and why the "class-as-a-unit" approach is often wrong.
I've had a lot of prior build automation experience with the XML hell of MSBuild, the PowerShell sizzle of PSake and the "feels like C# but doesn't quite act like it is" Cake approach, both in my open-source projects and in professional projects. But nothing has made it such a sweet and smooth experience as Nuke has. Using C# and .NET Core for build scripts, as Cake introduced, was a great idea, but because Cake didn't adopt C#/.NET fully, it became a painful exercise in plain-text editing. I've been using Nuke for a while now and it solved all of those concerns in a very well-designed way. To show you why I think this is true, let me show you why Nuke is such a blessing for those who treat their build scripts as first-class citizens of their code base.
Some people need time to think before they act. Others just want to get things done. People are different and you need to accept that. But you can learn to deal with it. Recently, I attended a workshop and collected a lot of old and new ideas for improving the way you collaborate with your fellow developers. Some may be obvious things you had forgotten, others may be new models for understanding communication styles. Regardless, I think anybody will benefit from being a better communicator, whether you’re a developer, an architect or a manager.
Concepts like the Dependency Inversion Principle, Inversion of Control containers and Dependency Injection libraries are often mixed up, but they are not the same. You shouldn’t use a DI library until you understand the problems you’re solving, nor is a DI library always the best solution.
Since I’ve felt this pain many times myself, I’d like to use this session to create some clarity. First, I’ll demonstrate the reason why people typically use libraries like Unity and Autofac in their code bases. Then I’ll show you that those same people often use the Dependency Inversion Principle in the wrong way (and need to revert to mocking libraries to compensate). Then I’ll show you how to apply the principles properly by carefully designing the seams in your architecture. And finally, I’ll show you how an advanced library can connect the dots in a very elegant way.
After having been in charge of building a large distributed enterprise-ready web-based system for over seven years, it is time to reflect on that period and share some of the most important lessons I've learned on software architecture and agile software development within the .NET realm. Could I have avoided the monolith in the first place? And what about domain modeling, technical design principles and design patterns? Do they still have merits? And don't forget .NET coding practices, testing practices, data management and deployment techniques. In other words, what would I do again or differently the next time I need to design a complicated high-performance system?
I don't have to tell anybody how awesome Event Sourcing is as an architecture style. But those who have been using it in production must have experienced the pains of keeping the upgrades fast enough, especially if you use a more traditional relational database like SQL Server. I once heard somebody say: "If it's slow, make it faster", but rebuilding a big projection is simply a very expensive operation that involves a lot of network traffic between the application and the database servers.
Over the years since we adopted Event Sourcing, we've been experimenting a lot and have implemented various improvements to make those reprojections faster. And we haven't stopped; we've been working on some new ideas lately. So in this session, I'd like to share those techniques, their pros and cons, and how we implemented them at a more detailed level.
Who remembers the days when you had to manually set up a physical server somewhere in a 19-inch rack? Nowadays those physical racks have been replaced by virtual machines provided by Google, Microsoft and Amazon. But some people are still provisioning them manually. The more mature organizations will script most of the provisioning using the native CLI or HTTP API provided by the specific cloud platform, but quite often those scripts are maintained separately from the code of the system that is being deployed. Terraform by HashiCorp is another attempt to support infrastructure-as-code by using a declarative syntax to define the desired infrastructure. But then again, declarative markup like that isn't really suited for anything but simple single-file configuration settings. It isn't code and doesn't provide real IntelliSense, line-by-line debugging and, most importantly, refactoring.
But what if you could use C# and .NET to provision your infrastructure and treat that code as a first-class citizen of your codebase, including all the capabilities that you would expect? Well, let me show you how Pulumi for .NET allows you to evolve your infrastructure code with the rest of the code base without turning it into a big spaghetti of YAML files.
It's 2022, so practices like Test Driven Development (TDD) have long found their way into organizations. However, the practices and principles needed to keep those unit tests maintainable haven't really changed. Still, I regularly talk to developers who somehow ended up with an incomprehensible, unmaintainable set of unit tests that they once believed in, but that are now holding them back. So in this session, I'd like to provide a refresher of those fundamental ideas and talk about why you should practice TDD, revealing intentions, naming conventions, state-based versus interaction-based testing, and how to stay out of debugger hell with proper mocking and assertion frameworks. And even if you believe you are well versed in the testing realm, join me anyway.
I was recently asked what I would do differently if I had the experience I have now. This got me thinking about the mistakes I've made, the traps I fell for and the dogmatism I became part of over the last 27 years as a professional software developer in the Microsoft realm. In this talk, I'll be going down memory lane with you and share everything I've learned on architecture, tools, technologies, agile software development, maintainable code, automated testing, documentation and anything else that I think you should know. Prepare yourself for a full hour of stories, both good and bad.
Over the years I've spoken many times about what Event Sourcing is and shared many of the good, the bad and the ugly parts of it in blog posts and various talks. However, I've never talked about how to actually build a system based on this architecture style. I keep getting the same questions over and over again. Like when to apply Event Sourcing and at what architectural level. How to deal with transactional boundaries within and outside the domain. How to build projections that are autonomous, reliable and self-supporting. How to deal with upgrades and blue-green deployments. But also how to handle bugs, design mistakes and crashing projections. Having made a lot of these mistakes myself over the years, it's time to share my current thoughts and opinions about this. Since the .NET space has a pretty rich set of open-source projects to support this, the examples and code will be .NET. But the concepts are universal, so don't let that scare you off.
Event Sourcing is becoming more mainstream these days, and a lot of conferences have demonstrated the pros and cons of this architecture style from multiple angles. I've done a few of these talks myself and published a lot of articles on best practices and solutions to common problems. But what nobody did was show you how to build an event-sourced system in the .NET ecosystem. There are a lot of open-source projects that you can use, so you'll need to find the right mix and add your own code to that. So let me show you how I would implement this in .NET, with almost 10 years of Event Sourcing experience under my belt.
Over the years, I've noticed that a lot of the architectural decisions I make are based on gut feelings. You know that feeling when you've heard some of the details and angles of a problem, and you start to build up a direction in the back of your mind? It's never perfect, but it gives you a general sense of how to approach the problem. Well, I've learned that these are design heuristics: principles and guidelines that don't always apply and don't guarantee a solution, but are enough to get you going. With almost 24 years of professional experience, I'd like to share the design heuristics I use in my day-to-day job as a software architect. They won't solve all your problems, but they'll give you enough tools to get you going on your next assignment.
Frameworks are supposed to help you build things more quickly and hide a lot of complexity around cross-cutting and infrastructural concerns. They are supposed to make it easier for inexperienced developers to join a running project. But frameworks also introduce a lot of magic, and that magic is going to backfire at some point. At least, that's my experience. And when it backfires, your code is so entangled with that framework that you can't get rid of it anymore.
So, instead of building and using frameworks, build and use libraries. That's easier said than done, so let me share some of the practices I use to build composable libraries myself. I'll talk about principles of package design and scoping, keeping your NuGet package dependencies in check, and how to use layered APIs to increase usability without hiding the magic.
If I had to name a single hype in software architecture land, it would be the microservice architecture. Microservices are supposed to be small, have a very focused purpose, can be deployed independently, and are completely self-supporting and loosely coupled. Ideally, microservices are technology agnostic, but hey, we're in the .NET space, aren't we? And they are not a goal, but a means to an end. In fact, a microservice architecture has many benefits and is a great strategy for decomposing a monolith. So how do you build a microservice? What technologies does the .NET realm offer us? And what if you don't want to deploy them independently? In this talk, I'll show you some of the pros and cons of microservices and how you can leverage Event Sourcing, OWIN and .NET to move your monolith into a bright new future.
After several in-depth talks about Event Sourcing, I realized that there's a large group of developers that may have heard about Domain-Driven Design, Command-Query Responsibility Segregation and Event Sourcing, but have not really connected the dots yet. So in this talk, I'd like to take a practical example of a simple domain and gradually introduce functional requirements to see how the principles behind DDD affect the way your entities are going to protect the business rules. After that, I'll introduce some real-world non-functional requirements and see if and how Event Sourcing and/or CQRS may or may not help to accomplish those.
Over the last 10 years, I've been maintaining several .NET open-source projects, one of which, Fluent Assertions, has crossed 90 million NuGet downloads. This may sound like a trivial thing, but I can tell you first-hand that maintaining a successful open-source project requires patience, perseverance and a lot of time. In other words, the exact same challenges that you face in your real-life day job, especially if you've been trying to break down your monolith into smaller components, libraries and services maintained by various folks. I'll start with a bit of history about this project to help you understand how similar it is to a real project. Then I'll dive into the characteristics of a great library or component, .NET framework compatibility, the branching and release strategy, and the build pipeline. But I'll also cover some aspects of (internal) marketing, dealing with cross-team contributions, and documentation and support. I'll wrap up with the design guidelines that I now apply to all my internal projects.
Enough books have been written, articles have been published and Youtube videos have been posted about how you should be behaving as a software developer, Scrum Master or team lead. To be original in this unoriginal world, I started thinking about the things you should NOT be doing. I came up with 30 tools, practices and principles covering topics like dealing with complexity, productivity, architecture, testability, traceability, maintainability, transparency and predictability that I believe hamper you as a professional. That sounds like a lot, but I promise you a fun and entertaining session that I'm sure will trigger a lot of recognition.
Writing maintainable code is not easy. Not only is it pretty subjective, a lot of techniques like Clean Code, SOLID and such can be misinterpreted, resulting in unconstructive dogma. I've made it my career's goal to find a good balance between all those different patterns, principles, practices and guidelines. In this talk, I'd like to share my own ideas on what it takes to write maintainable code. I'll talk about architecture, class design, automated testing and the consequences for your development process. If you care about your code, be there with me.
Covers topics like these:
• Characterization tests
• A proper IDE like Rider to
○ detect code inefficiencies and dead code
○ render project diagrams
○ quickly navigate the code base
• EditorConfig / ESLint so auto-formatting works
• Nuke build pipeline to get consistency
• GitHub actions build
• Adopt a version strategy and name branches accordingly
• Adopt automatic versioning using GitHub
• Add API verification tests
• Functional folder structure
• Architecture slide
• Enable nullable reference types per file
• Reduce the scope of types and members to find more dead code
• Move new code to .NET Standard projects or cross-compile
• Switch to SDK-style csproj
• Add scripts or builds steps to run the code locally
• Pulumi to automate PR deployments
• Adopt structured logging
• Use Dependency Injection
• Introduce interfaces and delegates without going overboard
• Add architecture dependency frameworks
• C4 Model
• Use test containers
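Several of the notes above (switching to SDK-style csproj, enabling nullable reference types, and moving code to .NET Standard or cross-compiling) come together in the project file. A minimal sketch, where the specific target frameworks and language version are illustrative assumptions, not a prescription:

```xml
<!-- SDK-style project file: multi-targeting plus nullable reference types -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Cross-compile against .NET Standard and a modern runtime -->
    <TargetFrameworks>netstandard2.0;net6.0</TargetFrameworks>
    <!-- Turn nullable reference types on project-wide -->
    <Nullable>enable</Nullable>
    <LangVersion>10</LangVersion>
  </PropertyGroup>
</Project>
```

To enable nullable reference types per file instead, as the notes suggest, leave the project-wide `<Nullable>` property out and put a `#nullable enable` directive at the top of each file as you migrate it.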