Dennis Doomen
Hands-on architect in the .NET space with 27 years of experience on an everlasting quest for knowledge to build the right software the right way at the right time
The Hague, The Netherlands
Dennis is a Microsoft MVP and Principal Consultant at Aviva Solutions, a Dutch Microsoft consultancy firm. With 27 years of experience under his belt as a software architect and/or lead developer, he specializes in designing full-stack enterprise solutions based on .NET, as well as providing coaching on all aspects of designing, building, documenting, deploying and maintaining software systems in an agile world. He is the author of Fluent Assertions, an assertion library that makes your unit tests more readable; of Liquid Projections, a set of libraries for building Event Sourcing projections; and he has been maintaining coding guidelines for C# since 2001. You can find him on Twitter, Mastodon and BlueSky.
Design Patterns for implementing Event Sourcing in .NET
Event Sourcing is becoming more mainstream these days, and many conferences have covered the pros and cons of this architecture style from multiple angles. I've done a few of those talks myself and published a lot of articles on best practices and solutions to common problems. But what nobody did was show you how to build an event-sourced system in the .NET ecosystem. There are a lot of open-source projects that you can use, so you'll need to find the right mix and add your own code to that. So let me show you how I would implement this in .NET, with almost 10 years of Event Sourcing experience under my belt.
My 30-ish Laws of Test Driven Development
About 15 years ago, I got inspired by the practice of Test Driven Development. Now, with many years of great and not-so-great experiences practicing Test Driven Development, I thought it was time to capture my own “laws”. The term "law" is obviously an exaggeration; "principles or heuristics" covers my intent much better. Either way, I want to talk about the scope of testing, testing through observable behavior, naming conventions, state-based vs interaction-based testing, the impact of DRY, patterns to build useful test objects and some of the libraries and tools I use. In short, everything I try to follow every day when I write, review or maintain code.
Design for testability
1. Start with the class design before you write your first tests
2. Then use the tests to drive the design further
3. Use functional boundaries
4. Things that live in adjacent folders usually belong to separate boundaries
5. It's fine to inject concrete classes inside boundaries
6. Use the Dependency Inversion Principle (DIP) to decouple code and make it more testable
7. Apply DRY within those boundaries
8. Don't bother defining the difference between "unit" and "integration". Use "appropriate-sized tests"
Scoping
1. Align your test scope with those internal boundaries…
2. But it's fine to test smaller sometimes
3. Test things that are designed for reusability separately.
4. Don't test implementation details separately, test them as part of the reusable scope
5. Test the real surface (HTTP APIs, not going through the database); see the sketch after this list
https://www.continuousimprover.com/2023/03/test-http-contracts.html
6. It's fine to include the database in tests
https://www.continuousimprover.com/2023/03/docker-in-tests.html
7. Use the right style for the right type of tests (AAA vs BDD-style)
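To make guideline 5 concrete, here is a minimal sketch of a test that exercises the real HTTP surface. It assumes an ASP.NET Core application exposed through a public Program class, the Microsoft.AspNetCore.Mvc.Testing package, xUnit and Fluent Assertions; the route and the OrderDto type are hypothetical.

using System.Net;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using FluentAssertions;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

// Exercises the application through its real HTTP surface instead of calling internal classes
public class OrderApiSpecs : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient client;

    public OrderApiSpecs(WebApplicationFactory<Program> factory)
    {
        client = factory.CreateClient();
    }

    [Fact]
    public async Task Returns_the_orders_of_an_existing_customer()
    {
        // Act: use the route a real consumer would use (a made-up one in this sketch)
        HttpResponseMessage response = await client.GetAsync("/api/customers/1234/orders");

        // Assert: protect the contract, i.e. the status code and the shape of the payload
        response.StatusCode.Should().Be(HttpStatusCode.OK);

        OrderDto[] orders = await response.Content.ReadFromJsonAsync<OrderDto[]>();
        orders.Should().NotBeNull();
    }

    // Hypothetical DTO describing the JSON contract of the endpoint above
    public record OrderDto(string Number, decimal Total);
}

The blog posts linked under guidelines 5 and 6 go into much more detail on testing HTTP contracts and on involving a real database through Docker.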
Test design guidelines
1. Treat tests as first-class citizens of your code base
2. Allow developers to use your tests as documentation
3. Use mocking frameworks between boundaries, but not internally
4. Don't return mocks from mocks
5. Make sure the test shows the arrange, act and assert explicitly (see the sketch after this list)
6. Hide the things that are not important, and show the things, such as routes and test data, that are needed to understand that specific test
7. Ensure the test succeeds or fails for the right reason
8. Prefer literal strings and in-line constants over pre-defined constants
9. Don't use production code in your assertions. The goal is to protect the contract, not to keep the test refactoring friendly.
10. Only assert what's relevant for that test case (e.g. using anonymous types, string wildcards, etc.)
11. Make sure the assertions provide enough information without having to go through debugger hell
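To make guidelines 5, 8 and 10 a bit more tangible, here is a minimal AAA-style sketch, assuming xUnit and Fluent Assertions; the Invoice class and its numbers are hypothetical.

using FluentAssertions;
using Xunit;

public class InvoiceSpecs
{
    [Fact]
    public void A_discount_is_applied_to_amounts_above_the_threshold()
    {
        // Arrange: only the data that matters for this case is visible
        var invoice = new Invoice(amount: 150m);

        // Act
        decimal total = invoice.CalculateTotal(discountPercentage: 10);

        // Assert: a literal value instead of a constant shared with the production code
        total.Should().Be(135m);
    }
}

// Hypothetical production class, included only to keep the sketch self-contained
public class Invoice
{
    private readonly decimal amount;

    public Invoice(decimal amount) => this.amount = amount;

    public decimal CalculateTotal(decimal discountPercentage) =>
        amount * (1 - (discountPercentage / 100));
}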
Naming and organization
1. Postfix your test classes with Specs to emphasize the specification part
2. Group tests by API or purpose using nested classes (see the sketch after this list)
3. Use a short fact-based name
a. https://www.continuousimprover.com/2023/03/test-naming.html
b. Don't include the names of code elements, classes or methods in them
c. Avoid should/then
d. Make sure it focuses on what the test is supposed to validate, not how it does that
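As a small illustration of these naming and grouping guidelines, a hypothetical xUnit skeleton; the shopping-cart domain is made up.

using Xunit;

// The "Specs" postfix emphasizes the specification aspect, nested classes group the tests
// by purpose, and the names read as short facts without mentioning classes or methods
public class ShoppingCartSpecs
{
    public class When_adding_items
    {
        [Fact]
        public void The_total_reflects_the_newly_added_item()
        {
            // ...
        }

        [Fact]
        public void An_out_of_stock_item_is_rejected()
        {
            // ...
        }
    }

    public class When_checking_out
    {
        [Fact]
        public void An_empty_cart_cannot_be_checked_out()
        {
            // ...
        }
    }
}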
10 years of Event Sourcing; thoughts and experiences
Having designed, built and run systems that are deployed on oil platforms all over the world and that use Event Sourcing as their underlying architecture means that I've experienced all the greatness and the pains this architecture style has to offer. In this talk, I'm going to share my experiences from over 10 years of practicing Event Sourcing in multiple systems. I'll talk about the reasons we adopted it in the first place, the challenges we had introducing it to the developers, the problems we ran into in production, and the lack of confidence we got from management. I'll conclude this talk with my current thoughts on Event Sourcing and how I would implement it if I had to do it all over again.
50 reasons why JetBrains Rider made me a more efficient C# developer
For as long as I’ve been developing in C#, I’ve been looking for add-ons to make me more productive. In 2004, I discovered JetBrains’ ReSharper for Visual Studio. With its improved IntelliSense, the coloring of identifiers and the many built-in refactorings, it felt like a new world opened up to me. But I also noticed that with every new Visual Studio and ReSharper release, the memory and CPU footprint increased a lot. In 2016, JetBrains announced Rider, a full-blown IDE for .NET and C# developers based on their wildly successful IntelliJ IDE and all the power of ReSharper. By the end of 2017, I fully switched to Rider and I never looked back.
Since then, the competition has been falling further and further behind. By now, Rider is the most refined IDE you can wish for, and the improvements and features just keep coming. Think of things like a predictive debugger, a very powerful integrated Git client (supporting interactive rebases), unrivaled code navigation and refactoring options, and solution-wide code analysis. So in this talk, I'd like to show you at least 50 reasons why Rider made me more productive.
What you can learn from an open-source project with 350 million downloads
After more than 10 years of development, our pet project Fluent Assertions has almost reached 250 million downloads. Providing a high-quality library like that doesn't come for free. We've been trying to write code that is clean enough for our contributors, write tests that are self-explanatory, ensure breaking changes are strictly controlled, and make it easy to use.
In this talk, I'd like to share the tools and techniques we have been using, how they've enriched our day jobs, and how they may do that for you too.
I'll talk about the release strategy, documentation, versioning, naming conventions, code structure, the build pipeline, automated testing, code coverage, API change detection, multi-targeting and more.
Covers: GitVersion, GitFlow, semver, Chill, test naming conventions, Nuke, GHA, Coverlet, Approval Tests, GH release notes, Jekyll/Minimal Mistakes, multi-targeting, C# 10, Rider, Jetbrains Annotations, editorconfig/Stylecop, TDD, design guidelines, decision logs, mutation testing
Getting a grip on your code dependencies
I'm sure every developer wants to be able to change code with confidence and without fear. Readable and self-explanatory code is one aspect of that. Too much coupling is another major source of problems that prevents you from changing one part of the system without causing side-effects. In this talk, I'd like to show you common sources of unnecessary coupling and offer you options to help prevent and/or break those. I'll talk about how principles like Don't Repeat Yourself and Dependency Injection can be a double-edged sword, how to detect unnecessary dependencies and how to use the Dependency Inversion Principle to unravel some of those. And yes, I will also talk about controlling dependencies at the package level.
@law of demeter
@using DRY
@Using DIP
@Dealing with ugly dependencies by applying DIP and an adapter (see the sketch after these notes)
@SQ rule to detect too many dependencies
@Dependency management at the package level
@Dependency injection is great, but try to avoid a global container
@ASP.NET core at the module level
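As a rough sketch of the DIP-plus-adapter idea above: the consuming module owns the abstraction, and an adapter at the boundary hides the ugly dependency. All names here (ICustomerDirectory, LegacyErpClient, etc.) are hypothetical stand-ins, not an existing API.

using System.Collections.Generic;
using System.Threading.Tasks;

// The consuming module owns this abstraction (Dependency Inversion Principle):
// it describes what the module needs, not what the external system happens to offer
public interface ICustomerDirectory
{
    Task<string> FindEmailAddress(string customerCode);
}

// The adapter lives at the boundary and hides the ugly dependency behind the module's
// own abstraction; LegacyErpClient is a stand-in for whatever awkward SDK you are stuck with
public class ErpCustomerDirectory : ICustomerDirectory
{
    private readonly LegacyErpClient erpClient;

    public ErpCustomerDirectory(LegacyErpClient erpClient) => this.erpClient = erpClient;

    public async Task<string> FindEmailAddress(string customerCode)
    {
        // Translate the module's question into the legacy API's vocabulary
        ErpRecord record = await erpClient.QueryAsync($"CUST={customerCode}");
        return record != null && record.Fields.TryGetValue("EMAIL", out string email) ? email : null;
    }
}

// Inside the module, code depends on the abstraction and stays easy to test
public class WelcomeMailSender
{
    private readonly ICustomerDirectory directory;

    public WelcomeMailSender(ICustomerDirectory directory) => this.directory = directory;

    public async Task<bool> CanSendTo(string customerCode) =>
        await directory.FindEmailAddress(customerCode) != null;
}

// Minimal stand-in for the third-party client, only here to keep the sketch compiling
public class LegacyErpClient
{
    public virtual Task<ErpRecord> QueryAsync(string query) => Task.FromResult<ErpRecord>(null);
}

public record ErpRecord(Dictionary<string, string> Fields);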
Common pitfalls of Event Sourcing and how to address them
Plenty of great content has been written about the problems Event Sourcing can help you solve. But after almost 10 years of using Event Sourcing in a distributed occasionally connected environment, I've experienced a lot of real-world challenges. In this talk, I'm going to talk about common challenges I've run into and the options I use to address them. I'll cover aggregate boundary challenges, problems finding the invariants, cross-domain communication, value types and events, versioning of events, eventual consistency of projections, projection bugs, time to rebuild projections, supporting blue/green deployments and more.
16 practical design guidelines for successful Event Sourcing
A couple of weeks ago I ended up in a technical debate on how to take an existing Event Sourced application further to fully reap the benefits it is designed to give you. I’ve written many posts about the pitfalls, the best practices and how to implement this in .NET specifically. But after 10 years of Event Sourcing, I still think it is useful to provide you with a list of the most important guidelines and heuristics that I think are needed to be successful with Event Sourcing. I'll talk about aggregate boundaries, appropriately-sized events, cross-domain communication, optimizing projection speed, aggregating data and more.
10 problems that Event Sourcing can help you solve
I regularly end up in a discussion on whether Event Sourcing is the right architecture style or not. As the universal answer to this question tends to be “it depends”, I started thinking about the typical problems where I would use Event Sourcing. With about 10 years of experience building and running Event Sourced systems, I came up with 10 examples of where Event Sourcing shines, some more functional and some more technical.
What is the right "unit" in unit testing and why it is not a class?
Whether you're just writing your first unit test or have been practicing Test Driven Development for many years, I'm sure your biggest challenge is to find the right scope for testing. Unfortunately a lot of books and guidance seem to imply that every class or type should be tested separately. Well, I can tell you from 15 years of unit testing experience that that's the best way to shoot yourself in the foot.
In this session, I'm going to show you some concrete examples and explain why I chose a certain boundary for the "unit" in unit testing. I will also share the design heuristics I use myself to find those boundaries and why they are heuristics rather than hard guidelines. I'll also illustrate how principles like DRY and SOLID can be both a blessing and a curse that may lead to the wrong design choices.
This session gives the audience food for thought on how the internal boundaries of your architecture can influence the scope of automated testing, how different layers of testing are complementary, and why the "class-as-a-unit" approach is often wrong.
The strengths and weaknesses of dependency injection
Concepts like the Dependency Inversion Principle, Inversion of Control containers and Dependency Injection libraries are often mixed up, but they are not the same. You shouldn’t use a DI library until you understand the problems you’re solving, nor is it always the best solution.
Since I’ve felt this pain many times myself, I’d like to use this session to create some clarity. First, I’ll demonstrate the reason why people typically use libraries like Unity and Autofac in their code bases. Then I’ll show you that those same people often use the Dependency Inversion Principle in the wrong way (and need to revert to mocking libraries to compensate). Then I’ll show you how to apply the principles properly by carefully designing the seams in your architecture. And finally, I’ll show you how an advanced library can connect the dots in a very elegant way.
Software architecture lessons I learned from my latest monolith
After having been in charge of building a large distributed enterprise-ready web-based system for over seven years, it is time to reflect on that period and share some of the most important lessons I've learned on software architecture and agile software development within the .NET realm. Could I have avoided the monolith in the first place? And what about domain modeling, technical design principles and design patterns? Do they still have merits? And don't forget .NET coding practices, testing practices, data management and deployment techniques. In other words, what do I think I should do again or differently the next time I need to design a complicated high-performance system?
Slow Event Sourcing reprojections? Just make them faster!
I don't have to tell anybody how awesome Event Sourcing is as an architecture style. But those that have been using it in production must have experienced the pains of keeping the upgrades fast enough, especially if you use a more traditional relational database like SQL Server. I once heard somebody say: "If it's slow, make it faster", but rebuilding a big projection is simply a very expensive operation that involves a lot of network traffic between the application and the database servers.
Over the years since we adopted Event Sourcing, we've been experimenting a lot and have implemented various improvements to make those reprojections faster. And we haven't stopped; we're already working on some new ideas. So in this session, I'd like to share those techniques, their pros and cons, and how we implemented them at a more detailed level.
60 lessons from three decades of software development
I was recently asked what I would do differently if I had the experience I have now. This got me thinking about the mistakes I've made, the traps I fell for and the dogmatism I became part of over the last 28 years as a professional software developer in the Microsoft realm. In this talk, I'll be going down memory lane with you and share everything I've learned about architecture, tools, technologies, agile software development, maintainable code, automated testing, documentation and anything else that I think you should know. Prepare yourself for a full hour of stories, both good and bad.
Based on my LinkedIn series https://www.linkedin.com/in/dennisdoomen/recent-activity/all/
Event Sourcing Done Right - Experiences from the Trenches
Over the years I've spoken many times about what Event Sourcing is and shared many of the good, the bad and the ugly parts of it in blog posts and various talks. However, I've never talked about how to actually build a system based on this architecture style. I keep getting the same questions over and over again: when to apply Event Sourcing and at what architectural level, how to deal with transactional boundaries within and outside the domain, how to build projections that are autonomous, reliable and self-supporting, how to deal with upgrades and blue-green deployments, but also how to handle bugs, design mistakes and crashing projections. Having made a lot of these mistakes myself over the years, it's time to share my current thoughts and opinions about this. Since the .NET space has a pretty rich set of open-source projects to support this, the examples and code will be .NET. But the concepts are universal, so don't let that scare you off.
Design heuristics that make you a better software developer
Over the years, I've noticed that a lot of the architectural decisions I make are based on gut feeling. You know that feeling when you've heard some of the details and angles of a problem, and you start to build up a direction in the back of your mind? It's never perfect, but it gives you a general sense of how to approach the problem. Well, I've also learned that these are design heuristics: principles and guidelines that don't always apply and don't guarantee a solution, but are enough to get you going. With almost 24 years of (professional) experience, I'd like to share the design heuristics I use in my day-to-day job as a software architect. They won't solve all your problems, but they will give you enough tools to get you going on your next assignment.
Build libraries, not frameworks
Frameworks are supposed to help you build things more quickly and hide a lot of complexity around cross-cutting and infrastructural concerns. They are supposed to make it easier for inexperienced developers to join a running project. But frameworks also introduce a lot of magic, and that magic is going to backfire at some point. At least, that's my experience. And when it backfires, your code is so entangled with that framework that you can't get rid of it anymore.
So, instead of building and using frameworks, build and use libraries. That's easier said than done, so let me share some of the practices I use to build composable libraries myself. I'll talk about principles of package design and scoping, keeping your NuGet package dependencies in check, and how to use layered APIs to increase usability without hiding the magic.
A practical introduction to DDD, CQRS & Event Sourcing
After several in-depth talks about Event Sourcing, I realized that there's a large group of developers that may have heard about Domain-Driven Design, Command-Query Responsibility Segregation and Event Sourcing, but have not really connected the dots yet. So in this talk, I'd like to take a practical example of a simple domain and gradually introduce functional requirements to see how the principles behind DDD affect the way your entities are going to protect the business rules. After that, I'll introduce some real-world non-functional requirements and see if and how Event Sourcing and/or CQRS may or may not help to accomplish those.
30 things that a software developer should not be doing
Enough books have been written, articles have been published and YouTube videos have been posted about how you should be behaving as a software developer, Scrum Master or team lead. To be original in this unoriginal world, I started thinking about the things you should NOT be doing. I came up with 30 tools, practices and principles covering topics like dealing with complexity, productivity, architecture, testability, traceability, maintainability, transparency and predictability that I believe hamper you as a professional. That sounds like a lot, but I promise you a fun and entertaining session that I'm sure will trigger a lot of recognition.
A lab around the principles and practices for writing maintainable code
Writing maintainable code is not easy to do. Not only is it pretty subjective, but a lot of techniques like Clean Code, SOLID and such can be misinterpreted, resulting in unconstructive dogma. I've made it my career's goal to find a good balance between all those different patterns, principles, practices and guidelines. In this talk, I'd like to share my own ideas on what it takes to write maintainable code. I'll talk about architecture, class design, automated testing and the consequences for your development process. If you care about your code, be there with me.
Tools and practices to help you deal with legacy code
We all love to build greenfield projects, but the reality is that in most jobs you have to deal with a lot of legacy code. This doesn't mean the code is bad. It just means that choices were made that were the right ones at that time, or that the developers were not entirely up-to-date with modern development practices. And that's exactly what this talk is about. I enjoy taking such a codebase, gradually introducing architectural seams, adding a proper build pipeline, introducing temporary tests and then gradually refactoring the codebase to become more maintainable, testable and reliable. So in this talk, I'd like to unfold my extensive toolbox of practices, principles, tools and mindset to help you improve your legacy code without jeopardizing your business. Most of it will be .NET focused, but quite a few also cover JavaScript/TypeScript.
Covers topics like:
• Characterization tests (see the sketch after this list)
• A proper IDE like Rider to
○ detect code inefficiencies and dead code
○ render project diagrams
○ quickly navigate
• Editorconfig / eslint so auto-format works
• Nuke build pipeline to get consistency
• GitHub actions build
• Adopt a version strategy and name branches accordingly
• Adopt automatic versioning using Github
• Add API verification tests
• Functional folder structure
• Architecture slide
• Enable nullable types per file
• Reduce the scope of types and members to find more dead code
• Move new code to .NET Standard projects or cross-compile
• Switch to SDK-style csproj
• Add scripts or builds steps to run the code locally
• Pulumi to automate PR deployments
• Adopt structured logging
• Use Dependency Injection
• Introduce interfaces and delegates without going overboard
• Add architecture dependency frameworks
• C4 Model
• Use test containers
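To give an impression of the first bullet, a characterization test pins down what the legacy code does today so you can refactor without changing behavior. The sketch below assumes xUnit and Fluent Assertions; the LegacyPriceCalculator class and its numbers are hypothetical.

using FluentAssertions;
using Xunit;

// A characterization test does not assert what the code *should* do; it captures what it
// *currently* does, so that later refactorings can be verified to preserve that behavior
public class LegacyPriceCalculatorSpecs
{
    [Theory]
    [InlineData(1, 100, 100)]
    [InlineData(10, 100, 950)]   // observed: a 5% bulk discount kicks in at 10 items
    [InlineData(0, 100, 0)]
    public void Keeps_producing_the_totals_it_produces_today(int quantity, int unitPrice, int expected)
    {
        var calculator = new LegacyPriceCalculator();

        decimal actual = calculator.CalculateTotal(quantity, unitPrice);

        // The expected values were copied from the current output, not derived from a spec
        actual.Should().Be(expected);
    }
}

// Hypothetical legacy code, included only so the sketch compiles
public class LegacyPriceCalculator
{
    public decimal CalculateTotal(int quantity, decimal unitPrice)
    {
        decimal total = quantity * unitPrice;
        return quantity >= 10 ? total * 0.95m : total;
    }
}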
Lap around the AI tools I use to be more productive as a software developer
These days, Artificial Intelligence is everywhere and has had a huge influence on our world. As a developer, there are now numerous AI tools available to improve your productivity. You can look at them with fear, but you can also, as I now do, fully embrace them and use them as tools to make yourself even more productive.
In this hands-on session without too many slides, I want to show you which tools I currently use and what I actually do with them in my daily life as a software developer.
22 reasons for switching from Azure DevOps to GitHub
As an open-source maintainer for over 15 years, and an open-source project with over 300 million downloads on NuGet, I like to believe I know what it takes to have large numbers of people contribute to a code-base efficiently. Next to that, I've been a consultant for almost 27 years helping organizations to get the most out of modern software development efforts. As such, I regularly work with Azure DevOps (AZDO), GitHub and even BitBucket and have been able to experience their differences first-hand. In this talk, I'm going to give you 22 reasons why you should switch to GitHub.
Using Boundary-Driven Development to beat code complexity
As developers we are trained to apply principles like SOLID, Unit Testing and DRY, are lured into adopting certain architecture styles such as Event Sourcing or Clean Architecture, and are influenced to move away from monoliths to microservices. The truth is that all of these have value, but all can be applied the wrong way resulting in the opposite effect.
I believe that understanding or carefully designing the internal boundaries of your code base is the key ingredient to preventing that notorious Big Ball of Mud. This is not a trivial feat, so in this talk I will show you why I believe in the importance of boundaries and provide you with heuristics and examples to (re)design your own boundaries and end up with a healthy codebase, whether you prefer to keep that monolith or go for a microservices approach.
• Help find the unit of testing
• Help find the scope of a DI container
• Help find where to apply DRY without causing coupling
• Help determine when to use an abstraction and when not
• Helps to figure out which architecture to use where
• Code that isn't supposed to be reused doesn't have to be in its own class.
15 years of insights from a TDD practitioner
Unit Testing and its more proactive version, Test Driven Development, have both opponents and proponents. And I get why. It's not easy to do, and if you do it wrong, it hurts. I've shot myself in the foot more than once. But if you do it right, you'll never want to go back to a situation where you don't write tests for all your code. That's what led me to create Fluent Assertions and become the maintainer of ChillBDD.
But TDD doesn't mean you really need to write all tests upfront. The reality is much more pragmatic than the books would like you to believe. And this isn't just about test code. A big chunk (if not all) of unit testing and TDD is about designing your solution to be testable.
In this talk I'll share everything I've learned over 15 years of practicing Test Driven Development.
Covers topics like
* The value of unit testing and TDD
* How to be pragmatic with TDD
* How architecture and code design affect testability
* How to find the right scope of testing
* Tips to write tests that are self-explanatory
Partially based on
* https://www.continuousimprover.com/2023/04/unit-testing-scope.html
* https://www.continuousimprover.com/2021/10/laws-test-driven-development.html
Building well-documented versioned HTTP APIs in ASP.NET Core
Anybody that has built HTTP or REST APIs has had to deal with things like OpenAPI or Swagger. Those are reasonably trivial to implement using ASP.NET Core, and you even get a nice Swagger UI website out of the box.
Just like any interface or programmatic contract, there are plenty of aspects that need careful analysis. Think of comprehensive online documentation for your APIs, using HTTP error codes appropriately, and providing clarity on what fields in the JSON are required and which are optional or can be empty. And don't forget versioning. Allowing a developer to view and switch between the supported versions and make it clear what APIs are deprecated isn't as trivial as you may think.
In this talk, I'll show you everything that you might need to know about building nicely documented, properly versioned HTTP APIs that make your consumers happy.
* What are important elements of a good HTTP API
* What HTTP error codes should be used for what
* Options for versioning HTTP APIs (URL, query parameters, headers)
* Capturing all of this in ASP.NET Core using attributes, XML comments, and the versioning APIs (see the sketch after this list)
* Adding customizations to support marking APIs as obsolete and switching between versions
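A minimal sketch of what the attribute-based approach can look like, assuming the Asp.Versioning.Mvc package (the successor of Microsoft.AspNetCore.Mvc.Versioning); the orders controller and its payloads are hypothetical, and the exact registration calls differ between package versions.

using Asp.Versioning;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

// Pick a default version and report supported/deprecated versions in the response headers
builder.Services
    .AddApiVersioning(options =>
    {
        options.DefaultApiVersion = new ApiVersion(1, 0);
        options.AssumeDefaultVersionWhenUnspecified = true;
        options.ReportApiVersions = true;
    })
    .AddMvc();

var app = builder.Build();
app.MapControllers();
app.Run();

// Version 1.0 is marked as deprecated; version 2.0 is the current contract
[ApiVersion("1.0", Deprecated = true)]
[ApiVersion("2.0")]
[ApiController]
[Route("api/v{version:apiVersion}/orders")]
public class OrdersController : ControllerBase
{
    /// <summary>Returns the orders using the original (deprecated) shape.</summary>
    [HttpGet]
    [MapToApiVersion("1.0")]
    public IActionResult GetV1() => Ok(new[] { new { number = "ORD-001" } });

    /// <summary>Returns the orders, including the total introduced in v2.</summary>
    [HttpGet]
    [MapToApiVersion("2.0")]
    public IActionResult GetV2() => Ok(new[] { new { number = "ORD-001", total = 99.95m } });
}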
My recipe for building teams that build the right thing the right way at the right time in .NET
Some people may know me as the author of an open-source project with almost 400 million downloads, but my real job is to help my clients to optimize everything that has to do with software development. This covers tools, the architecture, the way of working, the infrastructure, and sometimes even the culture of the development teams.
I usually do that by joining the development teams as a hands-on architect and working with them on their day-to-day tasks, while at the same time identifying anything that can be improved. This includes everything that is needed to make sure they don't only build the right thing for their business, but also apply the right engineering principles applicable to the stage of that "thing". In other words, train them on writing high quality fully testable code that is automatically deployed in the cloud.
In this talk, I want to share with you the mindset I follow in this endeavor, the tools I use, the practices, patterns & principles I follow and the heuristics I apply. In essence, everything I do to make the development teams successful with software development.
* Visualizing the development flow from idea to production
* Using an agile process
* How to document decisions
* Architecture styles
* Test Driven Development, Domain Driven Design, Event Storming
* Design heuristics such as "Reversible decisions"
* NUKE for treating your pipeline as code
* Pulumi as infra-as-code platform
* Prettifying code
* Code quality tools (Roslyn, editorconfig, eslint, Sonar, Qodana)
* Coding and design guidelines
* Structuring code along functional boundaries
* Clean source control
* Code documentation
* Design patterns (and when NOT to apply them)
How we use C#, Nuke and Pulumi to deploy to the cloud without obscurity
Achieving continuous deployment or continuous delivery is hard. Not only do you need to make sure your codebase and tool chain have a certain level of maturity, but automating the actual build, test and deployment pipeline can also be quite hard. Fortunately it's 2024 and we have two awesome tools for achieving that illustrious goal in the .NET realm. Nuke is a C#-based framework that encapsulates all the steps you need to compile, test, verify and package your code instead of relying on YAML magic. Pulumi is a framework for provisioning cloud infrastructure using C#. And no, you don't need any of the obscure JSON or custom DSL syntax that products like Terraform or Bicep use. Both products support the same inline documentation, debugging, refactoring and other capabilities we are so used to as .NET developers.
So join me to see how to use these tools to build a deployment pipeline that grows with your code-base and bring you into the modern world of continuous deployment.
* C# build pipeline using Nuke that triggers Pulumi to deploy a Docker container on Azure (see the sketch after this list)
* GitHub Actions triggering that pipeline based on a label
* Demonstrate code navigation, refactoring and debugging
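To give an impression of how little ceremony this takes, here is a minimal Nuke build sketch, assuming the Nuke.Common package; the solution name is hypothetical, and the Pulumi deployment step is only hinted at in a comment.

using Nuke.Common;
using Nuke.Common.Tools.DotNet;
using static Nuke.Common.Tools.DotNet.DotNetTasks;

// The build pipeline is plain C#: targets are properties, dependencies are explicit,
// and you get the same navigation, refactoring and debugging as in any other project
class Build : NukeBuild
{
    public static int Main() => Execute<Build>(x => x.Test);

    [Parameter("Configuration to build")]
    readonly string Configuration = "Release";

    Target Compile => _ => _
        .Executes(() =>
        {
            DotNetBuild(s => s
                .SetProjectFile("MySolution.sln")   // hypothetical solution name
                .SetConfiguration(Configuration));
        });

    Target Test => _ => _
        .DependsOn(Compile)
        .Executes(() =>
        {
            DotNetTest(s => s
                .SetProjectFile("MySolution.sln")
                .SetConfiguration(Configuration)
                .EnableNoBuild());
        });

    // A Deploy target could invoke Pulumi from here (e.g. by shelling out to "pulumi up"),
    // keeping build and provisioning logic in the same C# code base
}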
How to build well-designed and reusable NuGet packages
One of the fundamental building blocks in the .NET space is the NuGet package. It's the perfect mechanism to break down those big monolithic systems into nicely structured building blocks that are easy to understand and easy to maintain. But finding the right boundaries for those packages isn't always trivial. If you take the wrong path, you'll end up in dependency hell. Content-only packages can be a nice solution as well, but they're not easy to build. Also, making sure that those packages are properly versioned, don't have vulnerabilities and have proper release notes is something a lot of devs forget about.
In this code-heavy talk I'm going to build a GitHub repository with multi-targeting, a maintainable build pipeline, automatic versioning, and basic elements such as .editorconfig, unit testing and coverage reporting. This package will be protected by Dependabot and will have automatically generated release notes. Next to that, I'll share my principles of package dependencies to help you scope those packages. In short, everything you didn't know you needed to know about building professional NuGet packages.
Build and packaging pipeline
Multi-targeting
Content-only packages
Versioning
Documentation
Dependency management
.editorconfig
.gitattributes
Directory.Build.props
GitVersion
Coverage
GitHub
Dependabot
Release Notes
Developer Week '24 Sessionize Event
NDC Sydney 2024 Sessionize Event
NDC London 2024 Sessionize Event
Bitbash 2024 Sessionize Event
FreshMinds
Practical introduction to DDD, CQRS & Event Sourcing
.NET Zuid
Lap around the AI tools I use to be more productive as a software developer
Swetugg Gothenburg 2023 Sessionize Event
NDC Porto 2023 Sessionize Event
Techorama Netherlands 2023 Sessionize Event
Developer Week '23 Sessionize Event
NDC Oslo 2023 Sessionize Event
Craft Conference 2023
My 19 Laws of Test Driven Development
Code Europe 2023
Getting a grip on your code dependencies
What is the right "unit" in unit testing and why it is not a class?
DOTNED SATURDAY 2023 Sessionize Event
.NET Conf 2022 1nn0va Sessionize Event
.NET Developer Conference '22 Sessionize Event
Update Conference Prague 2022 Sessionize Event
KanDDDinsky 2022 Sessionize Event
Developer Week '22 Sessionize Event
Future Tech 2022 Sessionize Event
DOTNED SATURDAY 2022 Sessionize Event
DotNED
.NET Developer Conference '21 Sessionize Event
Domain-Driven Design Europe 2020 Sessionize Event
KanDDDinsky Sessionize Event
Techorama Netherlands 2019 Sessionize Event
KanDDDinsky Sessionize Event
Techorama NL 2018 Sessionize Event