
Clean Architecture with ASP.NET Core 3.0 • Jason Taylor • GOTO 2019

Jun 09, 2021
Welcome to the Clean Architecture with ASP.NET Core 3.0 course. My name is Jason Taylor and I am a solution architect at SSW. I have been developing for 19 years, and I have learned that the most important principle is KISS — keep it simple, stupid — and today I will show you the simplest approach to developing enterprise applications with a clean architecture.

Let's start with clean architecture. The domain and application layers are at the centre of the design; this is known as the core of the system. The domain layer contains enterprise logic and types, and the application layer contains business logic and types. The difference is that enterprise logic could be shared among many systems, while the application logic, or business logic, is specific to this system. Instead of depending on concerns such as data access and infrastructure, we invert those dependencies, so presentation and infrastructure depend on the core, but the core does not depend on any other layer.
Now, this is achieved by adding abstractions, or interfaces, within the application layer, which are then implemented outside the application layer, in other layers. For example, if we wanted to implement the repository pattern, we would add a repository interface within the application layer and an implementation within the infrastructure layer. With this design, all dependencies point inward, so you can see that the core does not depend on any other layer. Presentation and infrastructure depend on the core, but not on each other, and that's very important. We want to make sure that the logic we create for this system stays within the core. If the presentation layer, for example, depended on the infrastructure layer in order to send some notifications, that logic would have to live inside the presentation layer, which would have to orchestrate the interaction between presentation and infrastructure. We don't want that to happen, because then we can't reuse that logic. We want a system that lasts 20 years, and if we have an ASP.NET Core Web API front end, well, that won't be there in 20 years. We need that logic inside the core, where it's isolated from all that stuff.

This results in an architecture and design that is independent of frameworks: it doesn't require the existence of some framework or tool. It is testable: everything in the core is isolated from the outside world and contains the most important logic of the system, so we can 100% unit test that logic. It is independent of the UI: right now I mentioned that I'm using an ASP.NET Core Web API with an Angular front end, but you might want to change that — we're all getting a little tired of writing JavaScript — so we'll change that soon to Blazor, and with all my logic inside the core, that won't be so difficult. It is also database independent.
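To make the dependency inversion concrete, here is a minimal sketch of an abstraction defined in the application layer and implemented in the infrastructure layer. The interface and class names are illustrative, not taken from the template:

```csharp
using System.Threading.Tasks;

// Application layer: the abstraction. The core only ever sees this.
public interface INotificationService
{
    Task SendAsync(string to, string subject, string body);
}

// Infrastructure layer: the implementation. Nothing in the core
// references this class; it is wired up via dependency injection.
public class EmailNotificationService : INotificationService
{
    public Task SendAsync(string to, string subject, string body)
    {
        // Send the message using some external provider (SMTP, a mail API, ...).
        return Task.CompletedTask;
    }
}
```

With this shape, logic that orchestrates notifications can live in the application layer and be unit tested against the interface, while the external concern stays replaceable.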


Right now we are using SQL Server. With the same design we also use PostgreSQL and SQLite, and we are going to test it with Cosmos DB. It is also independent of anything external; in fact, the core knows nothing about the outside world, and that is what makes this design so great. We have all our logic encapsulated inside the core, isolated from the outside world, and that will make the difference between a system that lasts three years and a system that lasts 20 years. Let's take a look at a couple of examples. When I started talking about clean architecture last year,
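That database independence comes down to EF Core's provider model: swapping databases is a configuration change inside the infrastructure layer. A rough sketch (assuming EF Core; `ApplicationDbContext` and the connection string name are illustrative):

```csharp
// In the infrastructure layer's service registration.
// Swapping the database is a one-line provider change;
// nothing in the core changes.
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlServer(
        configuration.GetConnectionString("DefaultConnection")));

// e.g. to use SQLite instead:
// options.UseSqlite(configuration.GetConnectionString("DefaultConnection"));
```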
we used Northwind Traders as an example. Why? Because Northwind Traders is great, and I wanted to show the world that I wasn't wrong. This repository on GitHub now has close to 3,000 stars and almost 1,000 forks, and as a result there is a huge community behind this project. I want to take a moment to thank you for your support, for your pull requests, and for the tough questions that have really helped this solution evolve into what it is today. Based on this, I went ahead and created a clean architecture solution template. I wanted a very simple File > New Project experience — again, KISS, keep it simple. So if you want a great File > New Project experience with a clean architecture solution, you can use this template; if you want something more interesting, check out Northwind Traders.
Let's take a look at an example. Here in my GitHub account you can access the Northwind Traders solution or the clean architecture template. One second please... here we go. OK, very slow. Well, let's try that again — here we go. So this is the clean architecture solution template on GitHub. You can see it's set up as a GitHub template, so I can just Ctrl+click on that and create a new repository from the CleanArchitecture repository: just give it a name and I'm ready to start. But from my experience I wanted a real .NET Core solution template, so to start you can install the clean architecture solution template with `dotnet new --install`, and then to create a project you can run `dotnet new ca-sln`. Let's take a quick look at that now. I create a new folder — I'm in Copenhagen, so that's the name — jump into it, and then we run `dotnet new`.
This only needs to be done once, or when you want to update the clean architecture solution template from the package. We install the new project template, and then it gives us a list of all the project templates that are available from the command line. If we scroll up here, you can see the clean architecture solution template at the top, and its short name is ca-sln, so we can run that now and it will create the project. What it really does is create a project with all those layers and all the infrastructure ready to go, because this is something that I was doing manually every time I had to create a new client project. If we take a quick look here, you can see that it is using the correct namespace, based on the folder name we used. Jumping into src, you can see we have four folders: Domain, Application, Infrastructure, and WebUI. Going over to tests, you can see that we also have test projects for each of those layers. So what you get out of the box is a template that gets you up and running quickly.
We have already wired up a database and ASP.NET Core Identity, and we've provided solutions for everything that goes into the application layer, the domain layer, and the infrastructure layer, so it's very easy to get started. I won't run that one, because it would have to restore a bunch of dependencies, but we can take a look at this one here. You can see the clean architecture solution template in Visual Studio, basically ready to go. There is a readme file that explains a little about this and shows you how to get started running it.
It's the same readme file as in the GitHub repository, so it's very easy to get started. The key points in this section are that the domain layer contains enterprise-wide logic and types, which can be shared across multiple systems, while the application layer contains business logic and types that are specific to this system. The infrastructure layer contains all the external concerns. In previous versions of this talk, and in previous versions of the Northwind Traders solution, I separated persistence and infrastructure into two different layers, but I found it easier to join them together. I mean, what's simpler than two projects?
One project. I merged them; many of the dependencies were similar, so it was easy. So presentation and infrastructure depend only on the application layer, but not on each other, and infrastructure and presentation components can be replaced with minimal effort. Let's take a look at the domain layer. Inside the domain layer we have entities, value objects, enums, logic, and custom exceptions. We're going to look at each of them in turn. For the rest of this talk we'll look at each layer and make key points about it, so that, hopefully, if you take a look at this project or the template afterwards, you can get up and running quickly and understand how everything fits together. The first thing I want to show you is this to-do entity — I have a really good to-do sample in this project.
I should show you now. If you press Ctrl+F5 in this solution, this is what you will get: basically the default Angular template with a to-do component added. It allows you to manage numerous lists and numerous to-do items, and there are lots of nice little features there for working with lists, list options, and to-do items. One wonderful thing about building a to-do list app, of course, is that once you get to a certain point you can also use it to store your requirements as you go, so you can see all the things that I've checked off there and the things I haven't.
I ticked "requested a code review", which is probably bad — I did actually ask for a code review, but it hasn't been done yet. Let's take a look. I also have Open API built into this solution, so you can click through to the API. The Swagger UI is fully integrated and automated: when you create a new ca-sln solution, it is configured to generate the Open API specification automatically, it will render the Swagger UI automatically, and it will also generate an Angular TypeScript client and all the associated data types needed to run the front end.
I'll get to that later, but the first thing I wanted to look at was this to-do item DTO. One of the very simple key points I make is that you should not use data annotations within your domain model, because we don't want to clutter our domain model with data annotations. In the past, EF Core used them for validation and for object-relational mapping, but today EF Core doesn't use them for validation; it only uses them for object-relational mapping. The alternative is to use the fluent configuration API, which provides a much more powerful configuration system anyway, so we can go ahead and remove them. I also want to talk about value objects. In this solution I have an example of an AdAccount value object, and you can see that a value object is a complex type that doesn't have an identity. Why should I create an AdAccount value object when we could store an ad account just as easily as a string? That's what I want you to think about when you define your entities and define something as a primitive type.
Think about whether it is really a primitive type. Do all strings represent valid ad accounts, or are ad accounts a little more complicated than that? The answer, of course, is that ad accounts are more complicated: they consist of a domain name and a username in a certain format, and when working with an ad account we often want to access, say, the domain name, or the username, or the whole thing. If we use a string, we have to write the logic associated with that value somewhere in our system; new developers won't know where that logic is, and it will end up scattered throughout the system. If instead we use a complex type, like an AdAccount value object, we have one place to store all that logic, and that makes it very easy for developers to work with our system: you don't have to think about how to access just the username part of the ad account — there is a property for that. Fortunately, one of the features of EF Core is owned entity types, and it's very easy to configure a value object using the fluent API configuration. You can see here we are modelling an entity called Order that owns a ShippingAddress, and when this Order entity is mapped to the relational model, the database will store an Orders table with a column called Id, and then columns called ShippingAddress_Street and ShippingAddress_City. EF Core takes care of all of that for us, so when we use value objects this way and configure them as owned types, we don't have to worry about converting between our object model and the relational model or the other way around. It's very simple, and a great approach. One of the things I wanted to mention — and again, this is about making things easier for new developers — is to always initialize your collections, as you can see here.
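The owned entity type configuration described above can be sketched like this. It's a minimal example assuming EF Core; the Order and ShippingAddress names follow the talk, and the member names are illustrative:

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

public class ShippingAddress          // value object: no identity of its own
{
    public string Street { get; private set; }
    public string City { get; private set; }
}

public class Order
{
    public int Id { get; private set; }
    public ShippingAddress ShippingAddress { get; private set; }
}

public class OrderConfiguration : IEntityTypeConfiguration<Order>
{
    public void Configure(EntityTypeBuilder<Order> builder)
    {
        // EF Core maps the owned type to ShippingAddress_Street and
        // ShippingAddress_City columns on the Orders table.
        builder.OwnsOne(o => o.ShippingAddress);
    }
}
```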
I am initializing the Items collection, and I also recommend removing the setters or just making them private. Why do these things? It's about guiding developers into the pit of success: we want to make it easy for them to do the right thing and hard to do the wrong thing. If we don't do this, a new developer might say, "I need a to-do list, so I'm going to write todoList = new TodoList() and then initialize the to-do item collection, because that's what I'm used to." If we set this to private, they can't do that, and what's more, they never have to worry about whether it's initialized or not, because whether they're creating the entity themselves or it's coming back from EF Core, that collection is ready to go.
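Here's a minimal sketch of that convention (the names are illustrative, not the template's exact code):

```csharp
using System.Collections.Generic;

public class TodoList
{
    public int Id { get; private set; }
    public string Title { get; set; }

    // Initialized here and exposed with a private setter, so callers can
    // add items but can never replace, or forget to create, the collection.
    public IList<TodoItem> Items { get; private set; } = new List<TodoItem>();
}

public class TodoItem
{
    public int Id { get; private set; }
    public string Title { get; set; }
}
```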
So that's one less thing to worry about, and it makes our code simpler and more concise. One of the things I didn't show you in the AdAccount value object is an example of a custom domain exception. You can see here that if for some reason the ad account can't be created from the account string, it throws an InvalidAdAccountException. Why do this? It's simple: an InvalidAdAccountException is much easier to understand and debug than an IndexOutOfRangeException. So we can create these exceptions for expected domain-type events. I also added some features here just to help people get started.
There is an AuditableEntity base class. Any entity derived from it will automatically have its audit fields populated when changes are persisted to the database: it will store the ID of the user who created the entity, the date and time it was created, and, if it is modified, the ID of the user who modified it and the date and time of the modification. All that is needed is simply to derive from that class. It's an initial sample — you can definitely turn it into something more complex. Now, there are also some sample unit tests associated with this; right now they're just for value objects, but at least they show you how to get started. You can see that when you work with the value object, it's very easy to create one based on an account string — it just takes AdAccount.For(...). We have a ToString method that we can guarantee returns the correct format.
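The AuditableEntity base class and its save-time population might look roughly like this. This is a sketch, not the template's exact code; ICurrentUserService and IDateTime stand in for the user and clock abstractions the talk mentions:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public abstract class AuditableEntity
{
    public string CreatedBy { get; set; }
    public DateTime Created { get; set; }
    public string LastModifiedBy { get; set; }
    public DateTime? LastModified { get; set; }
}

public interface ICurrentUserService { string UserId { get; } }
public interface IDateTime { DateTime Now { get; } }

public class ApplicationDbContext : DbContext
{
    private readonly ICurrentUserService _currentUserService;
    private readonly IDateTime _dateTime;

    public ApplicationDbContext(DbContextOptions options,
        ICurrentUserService currentUserService, IDateTime dateTime) : base(options)
    {
        _currentUserService = currentUserService;
        _dateTime = dateTime;
    }

    public override Task<int> SaveChangesAsync(CancellationToken cancellationToken = default)
    {
        // Populate audit fields for any tracked AuditableEntity before saving.
        foreach (var entry in ChangeTracker.Entries<AuditableEntity>())
        {
            if (entry.State == EntityState.Added)
            {
                entry.Entity.CreatedBy = _currentUserService.UserId;
                entry.Entity.Created = _dateTime.Now;
            }
            else if (entry.State == EntityState.Modified)
            {
                entry.Entity.LastModifiedBy = _currentUserService.UserId;
                entry.Entity.LastModified = _dateTime.Now;
            }
        }
        return base.SaveChangesAsync(cancellationToken);
    }
}
```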
We have an implicit conversion operator that makes it easy to convert an AdAccount to a string simply by using assignment. We have an explicit conversion operator that makes it easy to convert from a string to an AdAccount. And we have this exception that is thrown in the event that the ad account string is invalid. So when we talk about value objects and all the logic encapsulated in them, you can see that we have a really simple place to encapsulate a lot of logic that would otherwise end up in some kind of helper that new developers can't find. The key points: avoid data annotations, which clutter the domain model — we can use the fluent API configuration instead, which we'll see when we look at the infrastructure layer.
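Putting those pieces together, a value object of this style might be sketched as follows. It's illustrative and assumes a "DOMAIN\username" format; the template's actual code may differ in detail:

```csharp
using System;

public class InvalidAdAccountException : Exception
{
    public InvalidAdAccountException(string adAccount, Exception inner)
        : base($"Ad account \"{adAccount}\" is invalid.", inner) { }
}

public class AdAccount
{
    private AdAccount() { }

    public string Domain { get; private set; }
    public string Name { get; private set; }

    public static AdAccount For(string accountString)
    {
        try
        {
            // Expected format: DOMAIN\username
            var index = accountString.IndexOf('\\');
            return new AdAccount
            {
                Domain = accountString.Substring(0, index),
                Name = accountString.Substring(index + 1)
            };
        }
        catch (Exception ex)
        {
            // A domain-specific exception is far easier to diagnose than
            // an IndexOutOfRangeException escaping from parsing code.
            throw new InvalidAdAccountException(accountString, ex);
        }
    }

    public override string ToString() => $"{Domain}\\{Name}";

    // Assigning an AdAccount to a string "just works"...
    public static implicit operator string(AdAccount account) => account.ToString();

    // ...while going the other way requires an explicit cast, since it can fail.
    public static explicit operator AdAccount(string accountString) => For(accountString);
}
```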
Use value objects when appropriate — just think about whether something really is a primitive type, or whether it's more complicated than that and has logic associated with it. Create custom domain exceptions to make your system easier to work with. Initialize all collections and use private setters, again to make it easier to work with your system, and automatically track changes with the AuditableEntity base type. Now let's take a look at the application layer. Inside the application layer we have interfaces — interfaces that are defined within the application layer and then implemented in the external layers, such as infrastructure — our models, meaning view models and DTOs, our logic, meaning commands and queries (we'll talk about that shortly), our validators, and of course, again, our custom exceptions. Everyone has already heard of
CQRS, but if you haven't, it means Command Query Responsibility Segregation, and with CQRS we separate our reads from our writes. When people talk about CQRS, they say it's great because it maximizes performance and scalability, but for me it's about simplicity. With CQRS it's easy to add new features — I just add a new command or query — and it's also easy to maintain: my changes usually only affect one command or query, and that means I have fewer breaking changes. What's more, if you're using CQRS, you should also use MediatR; those two are a perfect match. With MediatR we define our commands and queries as requests, and that means the application layer is just a series of request and response objects. That may not seem like a big deal, but what it gives us is the ability to attach additional behavior to any request that comes into the system. With MediatR we can attach behavior before and/or after each request, so if we need to implement cross-cutting concerns at the application layer, we can do so very easily.
So for example, if we want to validate every request that comes through the application layer, we can do that, and that's already in this solution. Let's take a look. In the application layer I basically have a single Common folder and then three feature folders. Inside the Common folder I have some behaviors — MediatR pipeline behaviors for those cross-cutting concerns — some custom exceptions, some interfaces for interacting with the infrastructure, some mappings, which we'll talk more about shortly, and some shared models. Inside this TodoLists folder I have some commands and queries.
You can see that everything is very well organized and easy to find. I have a query here — an ExportTodos query that will export a to-do list as a CSV — and a GetTodos query; let's take a look at that one. It's pretty interesting because it's made up of DTOs and view models. You can see here I have the GetTodos query, and it consists of a query class, which could also be considered a DTO — the query itself is a DTO.
Now, there aren't any properties being passed with this query, but there would be if we were, say, adding pagination — let's say we start adding things like page size and current page, that kind of thing. Some of the commands do have properties, as you can see. You can also see that I have nested the handler with the query. With MediatR, when we define our commands and queries as requests, we're basically defining an IRequest, which is the query, and an IRequestHandler, and that separates the request from the handling of the request. I'll zoom in a little bit more so you can see.
I've nested the handler, and this is something I've done to improve the discoverability of the queries. When you go to the to-do lists controller, that's where the query is invoked, and you can see here we're using MediatR to send the query. We don't pass anything for this one, because there are no properties — we just specify the query. If there were some properties, like pagination, then we could pass them from the client like this, or you could use a request object if you prefer. What I found is that when developers saw this, they pressed F12 on GetTodosQuery and arrived at a file that contained just the query, so it was hard to understand, and then to find the handler they actually had to go to Solution Explorer, scroll down, and double-click the handler. Nesting the handler — simply putting the two classes in the same file — improves discoverability, which is great. So let's take a look at this query: we have a query, which is a request, and it will return a TodosVm.
The TodosVm has everything the view will need to render the to-do view, including a lookup list of priority levels, and the to-do lists themselves, which are of type TodoListDto, and the TodoListDto contains the DTO properties. So we have the query, which returns the TodosVm, and then the handler says: OK, I handle requests of type GetTodosQuery and I return the TodosVm. Down here we have the constructor, where we're injecting our dependencies, and here we have the implementation. What I really like about this — what makes it very simple — is that this is the business use case of getting the to-do lists. Obviously there is some requirement behind it: the business came to me and said they want to implement that (sorry, I pressed F5, I believe), and I have been able to encapsulate everything that is associated with that use case in this file. In addition, I also have all the files associated with this use case in this folder, so when I need to change it, I come to this single folder and everything I need is here.
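A minimal sketch of that request/handler shape, assuming MediatR (type names loosely follow the talk; the bodies are illustrative):

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

public class TodoListDto
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class TodosVm
{
    public IList<string> PriorityLevels { get; set; } = new List<string>();
    public IList<TodoListDto> Lists { get; set; } = new List<TodoListDto>();
}

public class GetTodosQuery : IRequest<TodosVm>
{
    // The handler is nested inside the query to improve discoverability:
    // F12 on GetTodosQuery lands you on the request AND its handling.
    public class Handler : IRequestHandler<GetTodosQuery, TodosVm>
    {
        public Task<TodosVm> Handle(GetTodosQuery request, CancellationToken ct)
        {
            // A real implementation would query the database and
            // project the entities into the DTOs above.
            return Task.FromResult(new TodosVm());
        }
    }
}

// In the controller: var vm = await Mediator.Send(new GetTodosQuery());
```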
I don't have to jump to multiple folders — a DTOs folder, a ViewModels folder, a Controllers folder, or a Queries folder. It's all here. What's more, because it's organized this way, I actively discourage sharing this code. I don't want people to reuse this code; I want it to be used only for this use case. You might think that's a bit strange, but if I do that I don't have any breaking changes, and if I start sharing it, very soon we'll start to see conditionals in there — conditionals for separate use cases in certain queries — and it will just make things more complicated than necessary. So I keep these things together, make them easy to change, and avoid breaking changes, and I think that's a really good approach. You can see here we have our commands, very nice and easy to find. We have the CreateTodoList command — it's also very simple. We have our DTO (all we need to create a to-do list is a title), we have the handler, and that's all the logic associated with creating the to-do list. Now I want to show you some of the cross-cutting concerns. If we move on to the behaviors, you can see that I have three behaviors implemented.
I have a request logger that logs every request that enters the system, a request performance behavior that logs warnings for any requests that take too long, and a request validation behavior that validates every request that comes through the system. It's very simple to set these up. You can see that with just a bit of code I'm logging every request that comes through the system, including the user ID that initiated the request, the username, and the request itself — and that's not just the name of the request, it's the actual DTO that is the request, with all the arguments associated with it. So if I have a problem, I can go to the log.
I can take that DTO, create a unit test, see the test fail, and fix it. It's something simple — it's 32 lines of code — but it carries a lot of value, because remember, everything in the application layer is represented as a request: all requests come into the system through a command or a query, and I'm logging all of them. What's more, I'm validating all of them too. The request validation behavior runs before the request and basically says: OK, for this request — this DTO, this command or query — try to find validators; if there are any associated with it, collect all the errors, and if the error count is not zero, throw a ValidationException. So every request that comes into the system is automatically validated, and all my developers need to do if they want something validated is create a validator. Here's an example of a validator that uses FluentValidation, and you can see it has a rule for the title as part of the CreateTodoList command: it says that the title must not be empty, must have a maximum length of 200 characters, and must be a unique title.
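The validation behavior described above might look roughly like this. It's a sketch assuming MediatR and FluentValidation (signatures vary slightly between package versions), not the template's exact code:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using FluentValidation;
using MediatR;

public class RequestValidationBehavior<TRequest, TResponse>
    : IPipelineBehavior<TRequest, TResponse>
{
    private readonly IEnumerable<IValidator<TRequest>> _validators;

    // DI hands us every validator registered for this request type.
    public RequestValidationBehavior(IEnumerable<IValidator<TRequest>> validators)
        => _validators = validators;

    public Task<TResponse> Handle(TRequest request, CancellationToken ct,
        RequestHandlerDelegate<TResponse> next)
    {
        var context = new ValidationContext<TRequest>(request);

        // Run all validators and collect every failure.
        var failures = _validators
            .Select(v => v.Validate(context))
            .SelectMany(result => result.Errors)
            .Where(f => f != null)
            .ToList();

        if (failures.Count != 0)
            throw new ValidationException(failures);

        // No failures: continue down the pipeline to the actual handler.
        return next();
    }
}
```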
You can see here that I'm injecting into my application's database context, which is my interface to my actual database context, and I'm checking the collection. To see, that's a unique title, so this is a really powerful validation that I've created here and that's an important point. Well, we can use data annotations for validation; They tend to be suitable only for simple scenarios and when we want something a bit. a little bit more complex, then we can use flow and validation and it provides the best result for simple scenarios and more complicated scenarios, so the other thing that I've actually implemented in this new version of this template is some mapping behavior, so with Auto mapper I have done it in a way that I am very happy with the result and I have backed it up with unit tests tomake any of those runtime exceptions that used to cause headaches with Auto mapper less likely. to find, let me show you, so if we go to check out this to-do list and take a look at this to-do list, you may have noticed that it's implementing IMAP from two dualists and that's basically saying this to-do. -list DTO can be assigned from a to-do list and is assigned automatically, handles the rest.
Now, I also have the TodoItemDto, and it says IMapFrom&lt;TodoItem&gt;, but it isn't purely a convention-based mapping. AutoMapper is all about conventions — it will do a lot for you if you follow the conventions, but beyond that you have to specify configuration. You can see that while TodoListDto didn't have a Mapping method, TodoItemDto has a Mapping method, and it only specifies some configuration for how to map the Priority field. Now, this is actually made possible by C# 8 interfaces, which provide default implementations, so I can say that for classes implementing this interface, if they don't specify an implementation for the Mapping method, then the default implementation is used, and the default implementation basically says: this will just be a convention-based mapping, so create a map from the entity to the derived type, which is the DTO. So usually I don't have to specify anything — just implement IMapFrom with the entity as the type argument.
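The IMapFrom interface with its C# 8 default implementation might be sketched like this (assuming AutoMapper; the entity and enum are simplified stand-ins):

```csharp
using AutoMapper;

public enum PriorityLevel { None, Low, Medium, High }

public class TodoItem
{
    public int Id { get; set; }
    public string Title { get; set; }
    public PriorityLevel Priority { get; set; }
}

// C# 8 default interface implementation: implementers get a
// convention-based mapping for free, and only override Mapping
// when extra configuration is needed.
public interface IMapFrom<T>
{
    void Mapping(Profile profile) => profile.CreateMap(typeof(T), GetType());
}

public class TodoItemDto : IMapFrom<TodoItem>
{
    public int Id { get; set; }
    public string Title { get; set; }
    public int Priority { get; set; }

    public void Mapping(Profile profile)
    {
        // Only the Priority member needs explicit configuration;
        // everything else follows AutoMapper's naming conventions.
        profile.CreateMap<TodoItem, TodoItemDto>()
            .ForMember(d => d.Priority, opt => opt.MapFrom(s => (int)s.Priority));
    }
}
```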
Also, I didn't want to wire them all up manually, so I created this MappingProfile and added an ApplyMappingsFromAssembly method, so all these DTOs that implement IMapFrom are picked up. I have a method that will just scan the assembly, find them all, invoke the Mapping method on the interface, and the mapping is then added to the profile so AutoMapper knows about it. On that note, finally, when I go to use that mapping in my GetTodos query, I can just say ProjectTo&lt;TodoListDto&gt;, and it does that nested projection for me without me writing any more code than a bit of configuration for the priority field. I'm really excited about that, but I couldn't have written it without writing some unit tests. Here, inside the application unit tests, I have some mapping tests, and they are very simple. I have a test — actually a check provided by AutoMapper itself — that basically just asserts that the configuration is valid for all mapped types; it will fail if there are any properties that are not mapped, and that prevents a runtime error, which is great. Then I have a theory test where I just specify my expected mappings; each one is instantiated and we try to map, and if it fails we know that something went wrong with that mapping. So I'm protected from probably eighty or ninety percent of the problems that you might encounter with this technique. I'm also very happy because now I have this DependencyInjection class in the application layer, which is basically just an extension method on the service collection, and what that allows me to do is have a single extension method that wires up all the dependencies of my application, so I don't have to do it in Startup.
Startup.cs tends to bloat a lot if you're just using the built-in dependency injection, so with this, everything I have to do is say services.AddApplication(), and that's very good. The last thing to note at the application layer is the unit tests: you can look at the unit tests for the mapping behavior, and there are unit tests for commands and for validators too, so you have good examples of how to test all those elements within the core. The key points: using CQRS and MediatR simplifies your overall design, and what's more, MediatR simplifies cross-cutting concerns with its pipeline behaviors.
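The DependencyInjection class is essentially one extension method. A sketch (assuming the MediatR, AutoMapper, and FluentValidation DI extension packages of that era — exact registration method names vary by package version, and the behavior type is the one described earlier in the talk):

```csharp
using System.Reflection;
using AutoMapper;
using FluentValidation;
using MediatR;
using Microsoft.Extensions.DependencyInjection;

public static class DependencyInjection
{
    // From Startup.ConfigureServices, a single call wires up the
    // whole application layer: services.AddApplication();
    public static IServiceCollection AddApplication(this IServiceCollection services)
    {
        var assembly = Assembly.GetExecutingAssembly();

        services.AddAutoMapper(assembly);                 // profiles + IMapFrom mappings
        services.AddValidatorsFromAssembly(assembly);     // FluentValidation validators
        services.AddMediatR(assembly);                    // commands, queries, handlers

        // Cross-cutting pipeline behaviors (logging, performance, validation).
        services.AddTransient(typeof(IPipelineBehavior<,>),
                              typeof(RequestValidationBehavior<,>));

        return services;
    }
}
```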
FluentValidation is useful for all validation scenarios, not only the simple scenarios that data annotations cover, and AutoMapper can be used to simplify your mappings and projections. And this application layer knows nothing about infrastructure: we interact with infrastructure only via interfaces. So now let's take a quick look at the infrastructure layer. Inside infrastructure we have our persistence concerns; our identity concerns, using ASP.NET Core Identity; the file system — we have a CSV file example; the system clock — we have an abstraction for working with the system clock, because that is a dependency; and API clients. People question the unit of work and repository patterns a lot: should we implement these patterns?
Show of hands: who thinks we should still implement these patterns? So, about five people, and I'll jump in and say the rest of you think we shouldn't. The interesting thing is that we're all right — whether we think we should implement them or not — and we'll talk about that in a second. But first: abstracting EF Core is not always the best option. People say you need it to isolate your code from database changes, but EF Core uses a provider system, and it's simply a matter of bringing in a new NuGet package for SQL Server or SQLite or PostgreSQL or Cosmos — whatever we want to use — and we can change it. The DbContext is actually a unit of work, and the DbSet is actually a repository, so if we're just trying to implement those patterns in order to have those patterns and program that way, we already have that with EF Core. In the past — with EF 6 and earlier — we used to implement the repository and unit of work patterns to be able to write effective unit tests. We no longer need to do that, since we have the EF Core in-memory provider, and we also have the SQLite provider, and it can be really useful to use those as testing tools for writing unit tests as well. With that in mind, what do the experts think?
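Testing without a repository abstraction can be sketched like this, using the EF Core in-memory provider (assumes the Microsoft.EntityFrameworkCore.InMemory package; the context and entity are minimal stand-ins, and xUnit is used for the test shape):

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class TodoList { public int Id { get; set; } public string Title { get; set; } }

public class ApplicationDbContext : DbContext
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options) { }

    public DbSet<TodoList> TodoLists => Set<TodoList>();
}

public class TodoListsTests
{
    [Fact]
    public void Persists_todo_lists_with_the_in_memory_provider()
    {
        // The in-memory provider replaces the real database for tests,
        // so the DbContext itself serves as the unit of work.
        var options = new DbContextOptionsBuilder<ApplicationDbContext>()
            .UseInMemoryDatabase("TestDb")
            .Options;

        using (var context = new ApplicationDbContext(options))
        {
            context.TodoLists.Add(new TodoList { Title = "Shopping" });
            context.SaveChanges();
        }

        // A second context against the same named store sees the data.
        using (var context = new ApplicationDbContext(options))
        {
            Assert.Equal(1, context.TodoLists.Count());
        }
    }
}
```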
First we have Jimmy Bogard, creator of AutoMapper and MediatR and chief architect at Headspring, and he says he is over repositories, and definitely over abstracting the data layer. Then we have Steve Smith — Steve Smith is a long-time Microsoft MVP and was a Microsoft regional director for 10 years — and he says no, you don't need a repository, but there are a lot of benefits and you should consider it. Steve is being very diplomatic there: if you've heard his podcasts, read his blog posts, and watched his Twitter conversations, he's definitely pro-repository, and he has a lot of really good designs for how to create a great repository and why you should use one.
Then we have Jon Smith. He's the author of Entity Framework Core in Action, and he says no, the repository/unit-of-work pattern is not useful with EF Core. So with that in mind, when the experts disagree, what should we do? Well, it's really simple: we just need to remember that repository and unit of work are just design patterns. If a pattern solves a problem we have, that's great, use it; if it doesn't solve a problem you have, then don't use it, because that would just introduce unnecessary complexity. So what kind of problems could the unit of work and repository design patterns solve?
A very simple problem is a dependency on Entity Framework: if we didn't want our solution to depend on that framework, then implementing the unit of work and repository patterns would solve that problem. Another problem is if we wanted to limit access to certain entities. Say we create Order and OrderDetail with Order as the aggregate root; we could then create repositories only on the aggregate root and force developers to only update orders through the order's collection of order details. They wouldn't be able to update an order detail individually, and so we could attach a lot of logic and validation to that. So think about any design pattern this way: does it solve a problem I have?
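The aggregate-root idea above can be sketched like this. The names (`Order`, `OrderDetail`, `IOrderRepository`) are illustrative, not from the talk; the point is that only the root gets a repository, so all changes to details flow through the root where invariants can be enforced.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class OrderDetail
{
    public int Id { get; set; }
    public string Product { get; set; } = "";
    public int Quantity { get; set; }
}

public class Order
{
    private readonly List<OrderDetail> _details = new();

    public int Id { get; set; }
    public IReadOnlyList<OrderDetail> Details => _details;

    // Details can only be changed through the root, so validation
    // lives in exactly one place.
    public void AddDetail(string product, int quantity)
    {
        if (quantity <= 0)
            throw new ArgumentOutOfRangeException(nameof(quantity));
        _details.Add(new OrderDetail { Product = product, Quantity = quantity });
    }
}

// There is deliberately no IOrderDetailRepository: developers cannot
// load or update an order detail on its own.
public interface IOrderRepository
{
    Task<Order?> GetByIdAsync(int id);
    Task AddAsync(Order order);
    Task SaveChangesAsync();
}
```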
Good, let's take a look. Inside the infrastructure layer you can see we have four folders. We have Files, which holds the CSV helper, for example, that I've put together; we have Identity, which holds my concerns related to ASP.NET Core Identity; we have Persistence, which holds the Fluent API configurations for my entities and my database context; and we have Services, where right now I have a date/time service so that we don't depend on the machine clock. Let's first take a look at the Fluent API configuration, so you can see that it's pretty simple.
Here I have a to-do item configuration and a to-do list configuration, and that allows me to define how each entity will be mapped to the relational model. Then, in the database context, all I have to do is call ApplyConfigurationsFromAssembly, which is built into EF Core, and specify the assembly. Now, you'll notice that I left the base OnModelCreating call in place. In the past you didn't actually need to leave that there, so if you wrote code like this you probably skipped it and it worked fine. That's no longer the case if you're working with ASP.NET Core Identity.
With ASP.NET Core Identity version 3 and above, the base class actually has an implementation, so you need to leave that call in; otherwise you'll run into an error and have to add it back. So when you create these configurations, I want you to remember that EF Core is a convention-based framework, so the first thing you want to do is make sure you understand the conventions, and not create configurations for things the conventions already handle, because then you're making it more complicated than necessary. For example, you could explicitly configure this property as the key, but you don't have to: the fact that it's called Id means Entity Framework will automatically assume it's a key, and so on.
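A sketch of that configuration approach, assuming a hypothetical TodoItem entity: only settings the conventions can't infer go in the configuration class, ApplyConfigurationsFromAssembly picks up every configuration in the assembly, and the base.OnModelCreating call stays in for Identity.

```csharp
using System.Reflection;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

public class TodoItem
{
    public int Id { get; set; }      // treated as the key by convention
    public int ListId { get; set; }  // treated as a foreign key by convention
    public string Title { get; set; } = "";
}

public class TodoItemConfiguration : IEntityTypeConfiguration<TodoItem>
{
    public void Configure(EntityTypeBuilder<TodoItem> builder)
    {
        // Only configure what conventions cannot infer.
        builder.Property(t => t.Title)
            .HasMaxLength(200)
            .IsRequired();
    }
}

public class ApplicationDbContext : IdentityDbContext
{
    public ApplicationDbContext(DbContextOptions options) : base(options) { }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        // Applies every IEntityTypeConfiguration<T> found in this assembly.
        builder.ApplyConfigurationsFromAssembly(Assembly.GetExecutingAssembly());

        // Required with ASP.NET Core Identity 3.0+; the base class
        // configures the Identity tables here.
        base.OnModelCreating(builder);
    }
}
```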
The to-do item entity has a ListId, and based on the name of that property it's assumed to be a foreign key and mapped as such. So whenever you're working with a convention-based system, know the conventions, because it will make the system much easier to work with, and it means your code will be simpler as a result. Now, taking a quick look at the database context, you can see here we have our Identity concerns hooked up, and we also have this SaveChangesAsync method that I've overridden. You can see that this is where I implemented the auditable entity functionality: basically, it looks to see whether the entity is being added or modified and then sets the audit properties using these two services. Very simple, and something you can build on. Then inside Identity we have the application user, and we have these identity result extensions that I built.
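The overridden SaveChangesAsync can be sketched like this. ICurrentUserService, IDateTime and AuditableEntity mirror the services mentioned in the talk, but the exact signatures here are assumptions.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public abstract class AuditableEntity
{
    public string? CreatedBy { get; set; }
    public DateTime Created { get; set; }
    public string? LastModifiedBy { get; set; }
    public DateTime? LastModified { get; set; }
}

public interface ICurrentUserService { string? UserId { get; } }
public interface IDateTime { DateTime Now { get; } }

public class ApplicationDbContext : DbContext
{
    private readonly ICurrentUserService _currentUser;
    private readonly IDateTime _dateTime;

    public ApplicationDbContext(DbContextOptions options,
        ICurrentUserService currentUser, IDateTime dateTime) : base(options)
    {
        _currentUser = currentUser;
        _dateTime = dateTime;
    }

    public override Task<int> SaveChangesAsync(CancellationToken ct = default)
    {
        // Stamp audit fields on every tracked auditable entity
        // before the changes are persisted.
        foreach (var entry in ChangeTracker.Entries<AuditableEntity>())
        {
            switch (entry.State)
            {
                case EntityState.Added:
                    entry.Entity.CreatedBy = _currentUser.UserId;
                    entry.Entity.Created = _dateTime.Now;
                    break;
                case EntityState.Modified:
                    entry.Entity.LastModifiedBy = _currentUser.UserId;
                    entry.Entity.LastModified = _dateTime.Now;
                    break;
            }
        }
        return base.SaveChangesAsync(ct);
    }
}
```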
The identity service is a bit simpler. For now I've just created a single identity service; further along you could create a user service and a role service and that kind of thing, but the concerns for this application are relatively simple. What you can see is that the identity service implements the identity service interface that lives inside the application layer, so when I need to write logic against identity, I'm just writing against that interface, which means I can move this implementation away from ASP.NET Core Identity whenever I want. So there are some samples there of how to work with Identity. It's important to note that no layer depends on infrastructure; it's completely isolated from the rest of the system. It also has a dependency injection extension, and in this extension I register all the dependencies associated with the infrastructure. You can see there's the database, there's the interface I use to interact with the database, and there's some configuration for IdentityServer on the back end; we have many unit tests that work with IdentityServer and exercise authenticated controllers, so there are good examples there. There's also the configuration for when we're not in the test environment, and that's it. Again, all this logic has been moved out of Startup.cs into these extension methods, so that all I have to do during startup is say AddInfrastructure. You can see that I pass in the configuration and the environment to have that information available.
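The AddInfrastructure extension pattern looks roughly like this. The specific registrations are illustrative (the real template also wires up Identity and IdentityServer here); IApplicationDbContext, IDateTime and IIdentityService stand in for the application-layer interfaces described above.

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public static class DependencyInjection
{
    public static IServiceCollection AddInfrastructure(
        this IServiceCollection services, IConfiguration configuration)
    {
        services.AddDbContext<ApplicationDbContext>(options =>
            options.UseSqlServer(
                configuration.GetConnectionString("DefaultConnection")));

        // Expose the context to the application layer only through
        // its interface, keeping the core independent of infrastructure.
        services.AddScoped<IApplicationDbContext>(provider =>
            provider.GetRequiredService<ApplicationDbContext>());

        services.AddTransient<IDateTime, DateTimeService>();
        services.AddTransient<IIdentityService, IdentityService>();

        return services;
    }
}

// In Startup.ConfigureServices, one line then wires the whole layer:
//     services.AddInfrastructure(Configuration);
```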
Inside the test folder there are some integration tests you can check out, including tests for the auditable entity logic. So the key point: the infrastructure is independent of the database. We could change providers and choose a different database solution. However, the solution I've created is not independent of Entity Framework Core; I've gone all-in on Entity Framework Core, so if I decide to move to a different ORM in the future, I'll be in a bit of pain there, but I'm okay with that. I've been using EF for longer than I can remember, and the Fluent API configuration over data annotations is so useful.
It's simply a better approach: it does more, and it means your domain models aren't cluttered with data annotations. So prefer the Fluent API over data annotations, know your conventions and prefer conventions over configuration, automatically apply all entity type configurations, and make sure no layer depends on the infrastructure layer. If the presentation layer, for example, did depend on it, logic would be created in the wrong places, and we need it to stay within the core. Finally, the presentation layer. The presentation layer can be whatever we want: we've kept all the logic inside the core, so what we've created is very simple.
We have well-defined view models, and we have well-defined queries and commands, so the interactions are as simple as possible. It could be a single-page app in Angular, React or Vue; it could be Blazor, Web API, Razor Pages, MVC, even Web Forms if you prefer. Let's take a look. One thing I want to show is the typical example of a controller. In this example you see that the database context is injected directly into the controller, and entities are returned from the controller, which is a big no-no: it comes with a lot of problems around security and complexity that we don't really want to deal with. And not all logic is as simple as this; here, all the logic is inside the controller, and that's because we didn't give ourselves any other option when we injected something as low-level as a database context into something like a controller, leaving no room to put the logic anywhere else. But we've avoided that problem with this solution because we're using CQRS: we've moved all our logic into commands and queries in the application layer. So that was a typical example, but let's take a look at the same controller in the clean architecture solution. You can see that with this controller we don't even have a constructor. Why have a constructor? If you don't need a constructor, it's simpler. We just have a base class, and the Get method basically sends a MediatR request.
The Mediator comes from the base class, and it's resolved using property injection. The get-by-ID method is basically two lines of code; Create is a couple of lines of code; Update and Delete are two lines of code each. So there's absolutely no logic there: we've essentially reduced this controller to infrastructure. It does one thing: it receives a request and returns a response. It's as simple as possible, and all the logic has been moved to the relevant commands and queries. You can also see the view models we talked about.
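The thin-controller pattern described can be sketched like this. The base class exposes MediatR's IMediator via property injection, so derived controllers need no constructor; the command and query type names are illustrative, not from the talk.

```csharp
using System.Threading.Tasks;
using MediatR;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

[ApiController]
[Route("api/[controller]")]
public abstract class ApiController : ControllerBase
{
    private IMediator? _mediator;

    // Property injection: resolved from the request's service provider,
    // so derived controllers don't declare a constructor at all.
    protected IMediator Mediator =>
        _mediator ??= HttpContext.RequestServices.GetRequiredService<IMediator>();
}

public class TodoItemsController : ApiController
{
    [HttpGet("{id}")]
    public async Task<ActionResult<TodoItemVm>> Get(int id)
        => await Mediator.Send(new GetTodoItemQuery { Id = id });

    [HttpPost]
    public async Task<ActionResult<int>> Create(CreateTodoItemCommand command)
        => await Mediator.Send(command);

    [HttpDelete("{id}")]
    public async Task<IActionResult> Delete(int id)
    {
        await Mediator.Send(new DeleteTodoItemCommand { Id = id });
        return NoContent();
    }
}
```

Each action just translates a request into a MediatR send and the result into a response; all validation and business rules live in the handlers.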
For each query we try to return a well-defined view model, so that when the client receives the response it has everything it needs to render that view, whatever it is, and that means the client doesn't have to perform additional API calls for more information. So encapsulate everything into well-defined view models and, again, simplify. I mentioned that OpenAPI runs behind the scenes in this solution, and I'll give you a quick look at how it's configured; I also have a blog post that you can check out. Essentially, for OpenAPI I prefer to use NSwag. NSwag is probably the best toolchain because it's able to generate the specs and it's able to generate the clients; it's a kind of one-stop shop, and it has a nice Windows app that lets you specify those settings so you can get up and running very quickly. In this solution, in the web UI, what I'm doing at build time is generating the spec right there, and generating an Angular client right there. When I talk about this, I talk about how it bridges the gap between the back end and the front end, and the reason is that all of those well-defined view models and all those well-defined commands are generated here.
This is code that I don't have to write. If we go down to the bottom here, I can see my update details command, so I have these well-defined types on the back end, and instead of manually recreating these types on the front end, they're now generated, and the Angular client to access the web API is now generated, and it all happens automatically. I just have to build the web UI project and all of that happens. If you build microservices with this approach, you can also use NSwag to generate C# clients, and then you can publish them as NuGet packages and share them with the other services, depending on your approach, of course. One of the things I'm doing in the application layer is that if something goes wrong, we throw an exception, so I've created custom exception middleware that is responsible for intercepting those application-layer exceptions and turning them into something a little more meaningful.
You'll see here that we have two exception types that are handled: the validation exception and the not-found exception. If a validation exception is caught, we basically take those failures, turn them into a 400 Bad Request response, and return the result to the client. I'll be releasing a new version of that shortly that will work with validation problem details, so it will be an even better experience than what we have. The not-found exception is basically thrown when you can't find some entity, when you try to get something by ID or update or delete something; the middleware turns it into a 404 Not Found response and returns the message to the client, so we have the same nice, consistent experience, at least from the ASP.NET Core perspective. With those exceptions, it's really up to the client how it handles them, and in this case I've handled it using custom exception middleware. With identity, there are only a couple of classes here under Services.
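Middleware along those lines can be sketched as follows. ValidationException (with a Failures collection) and NotFoundException are assumed application-layer types; the mapping to 400 and 404 is the behavior described above.

```csharp
using System;
using System.Net;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class CustomExceptionHandlerMiddleware
{
    private readonly RequestDelegate _next;

    public CustomExceptionHandlerMiddleware(RequestDelegate next) => _next = next;

    public async Task Invoke(HttpContext context)
    {
        try
        {
            await _next(context);
        }
        catch (Exception ex)
        {
            await HandleExceptionAsync(context, ex);
        }
    }

    private static Task HandleExceptionAsync(HttpContext context, Exception exception)
    {
        // Map application-layer exceptions to meaningful HTTP responses.
        var (code, body) = exception switch
        {
            ValidationException v => (HttpStatusCode.BadRequest,
                JsonSerializer.Serialize(v.Failures)),
            NotFoundException nf => (HttpStatusCode.NotFound,
                JsonSerializer.Serialize(new { error = nf.Message })),
            _ => (HttpStatusCode.InternalServerError,
                JsonSerializer.Serialize(new { error = "An unexpected error occurred." }))
        };

        context.Response.ContentType = "application/json";
        context.Response.StatusCode = (int)code;
        return context.Response.WriteAsync(body);
    }
}
```

Registered early in the pipeline (e.g. `app.UseMiddleware<CustomExceptionHandlerMiddleware>()`), this gives every client the same consistent error shape.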
I have the current user service, which basically interrogates the claims and finds the user ID, so I can then pass it to my identity service within the infrastructure and do things with that user. One of the things I added to the web UI is a bunch of integration tests, and there was a lot of work in this, just because of the integration with identity. If I zoom in here, you can see that we're using the ASP.NET Core WebApplicationFactory to wire up all our dependencies and spin up a host. We have an integration test helper with a test version of the current user service,
a test version of the date/time service, and a test version of the identity service. With all that work done, we actually have some really simple tests we can write. You can see here that this create test references the handlers, the to-do item handlers, and the create command, so we have some tests written for that. And you can see I have a helper method to get an HTTP client; not just an HTTP client, but an authenticated client, so I've made it really easy to write these integration tests. If you want to test at the level where you're exercising the entire ASP.NET Core stack, you can do that, and it's simple because it's already set up. So you can see here:
I'm authenticating my HTTP client, I'm creating a command, I'm converting that into request content, and then I'm posting it and just ensuring a success status code. So you can see that writing these tests is very simple, and what I'm trying to do with them is just check the essential inputs and outputs of the system. I want to make sure that at a high level everything works, and the fact that I can write these tests, and write them quickly, is just fantastic. Here's our delete test: you can see I'm basically saying, hey, I have a valid ID, and I want to make sure that if I delete it, it returns a success status code. We don't care that it was actually deleted from the database or anything; we just want to make sure that at a high level everything is working. We can write other tests to verify that it was removed from the database.
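The style of test described can be sketched like this with xUnit and WebApplicationFactory. The route, payload and Startup type are assumptions; in the real template a helper also attaches a test authentication token to the client.

```csharp
using System.Net;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class TodoItemsTests : IClassFixture<WebApplicationFactory<Startup>>
{
    private readonly WebApplicationFactory<Startup> _factory;

    public TodoItemsTests(WebApplicationFactory<Startup> factory)
        => _factory = factory;

    [Fact]
    public async Task Create_ReturnsSuccessStatusCode()
    {
        HttpClient client = _factory.CreateClient();

        // Build the command, serialize it, post it, assert success.
        var command = new { title = "Do the groceries" };
        var content = new StringContent(
            JsonSerializer.Serialize(command), Encoding.UTF8, "application/json");

        var response = await client.PostAsync("/api/todoitems", content);

        response.EnsureSuccessStatusCode();
    }

    [Fact]
    public async Task Delete_WithInvalidId_ReturnsNotFound()
    {
        HttpClient client = _factory.CreateClient();

        var response = await client.DeleteAsync("/api/todoitems/99999");

        Assert.Equal(HttpStatusCode.NotFound, response.StatusCode);
    }
}
```

These are high-level input/output checks of the whole stack, fast to write because the host setup and test services are already in place.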
And this one: given an invalid ID, it basically ensures that the returned HTTP status code is Not Found. So again, it's basic input/output testing of the system, very fast to write because all the infrastructure is in place. So the key points: controllers should not contain any application logic; a controller is responsible only for taking a request and converting it into a response, and the logic lives in the application layer. We should create and consume well-defined view models; don't make your clients ask questions. OpenAPI bridges the gap between the front end and the back end, NSwag automates the generation of OpenAPI specs and clients, and we automate that with a simple MSBuild task so we don't have to think about it again. So, thank you for coming to my talk today.
If you want to learn more, grab the code and/or install the template and give it a try. I think you'll find this approach simple to build and maintain, from development to production. Thank you.
