Great talk, there's a lot I can relate to in here.
I find this topic difficult to navigate because of the many trade-offs. One aspect that wasn't mentioned is temporal. A lot of the time, it makes sense to start with a "database-oriented design" (in the pejorative sense), where your types are just whatever shape your data has in Postgres.
However, as time goes on and your understanding of the domain grows, you start to realize the limitations of that approach. At that point, it probably makes sense to introduce a separate domain model and use explicit mapping. But finding that point in time where you want to switch is not trivial.
Should you start with a domain model from the get-go? Maybe, but it's risky because you may end up with domain objects that don't actually do a better job of representing the domain than whatever you have in your SQL tables. It also feels awkward (and is hard to justify in a team) to map back and forth between a domain model, a SQL SELECT row, and a JSON response body if they're pretty much the same, at least initially.
So it might very well be that, rather than starting with a domain model, the best approach is to refactor your way into it once you have a better feel for the domain. Err on the side of little or no abstraction, but don't hesitate to introduce abstraction when you feel the pain from too much "concretion". Again, it takes judgment so it's hard to teach (which the talk does an admirable job in pointing out).
Pretty naive question, but what differentiates a "domain model" from these more primitive data representations? I see the term thrown around a lot but I've never been able to grok what people actually mean.
By domain model do you mean something like what a scientist would call a theory? A description of your domain in terms of some fundamental concepts, how they relate to each other, their behaviour, etc? Something like a specification?
Which could of course have many possible concrete implementations (and many possible ways to represent it with data). Where I get confused with this is I'm not sure what it means to map data to and from your domain model (it's an actual code entity?), so I'm probably thinking about this wrong.
A quick example can be found with dates. You can store a date as an ISO 8601 string, and often that makes the most sense, since it's a spec shared between systems. But when it comes to actually displaying it, a lot of additional concerns creep in, such as localization and timezones. Then you need a data structure that splits out the components, and some components may be used as keys or parameters for logic that produces the final representation, also as a string.
So both the storage and presentation layers use strings, but the strings differ. To reconcile the two, you need an intermediate layer, which contains structures (the domain models) and logic that manipulates them. To jump from one layer to another, you map the data: in this example, string to struct, then struct back to string.
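A minimal Python sketch of that string-to-struct-to-string layering (the type and method names here are illustrative, not from the talk):

```python
from dataclasses import dataclass
from datetime import date

# Domain model: structured components, independent of storage or display.
@dataclass(frozen=True)
class EventDate:
    year: int
    month: int
    day: int

    @classmethod
    def from_iso(cls, s: str) -> "EventDate":
        # Storage layer -> domain: parse the ISO 8601 string.
        d = date.fromisoformat(s)
        return cls(d.year, d.month, d.day)

    def to_display(self, locale: str = "en") -> str:
        # Domain -> presentation: locale-aware formatting (grossly simplified).
        if locale == "en":
            return f"{self.month:02d}/{self.day:02d}/{self.year}"
        return f"{self.day:02d}.{self.month:02d}.{self.year}"

stored = "2024-03-01"           # storage: a string
d = EventDate.from_iso(stored)  # intermediate: a struct
print(d.to_display("en"))       # presentation: another string, 03/01/2024
print(d.to_display("de"))       # 01.03.2024
```

Both ends are strings, but all the localization logic lives against the struct in the middle.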
With MVC and CRUD apps, the layers often have similar models (or the same ones, especially with dynamic languages), so you don't bother with mapping. But as the use cases become more complex, they alter the domain layer and the models within it, and then you need to add mapping code. Your storage layer may have many tables (if using SQL) that become a single struct at the domain layer, which in turn becomes many models at the presentation layer with duplicated information.
That's why a lot of people don't like most ORM libraries. They're great when the models are similar, but when they start to diverge, you always end up resorting to raw SQL queries, and then it becomes a pain to refactor. The good ORM libraries rely on metaprogramming, and then they're just weird SQL.
ORM libraries have value-conversion functionality for such trivial examples: https://learn.microsoft.com/en-us/ef/core/modeling/value-con...
Not really. It's all about the code you need to write. Instead of wrangling the data structures you get from the ORM, which are usually maps and arrays of maps, you have something that makes the domain logic cleaner and clearer. Mapping code is simple, so you just pay the time price of writing it in exchange for maintainable use-case logic.
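A sketch of what that mapping code looks like in Python (a plain dict stands in for the map-like row an ORM typically hands back; the names are made up):

```python
from dataclasses import dataclass

# Stand-in for the map-like row you'd get back from an ORM or DB driver.
row = {"id": 7, "status": "paid", "total_cents": 1999}

@dataclass
class Order:
    id: int
    total_cents: int
    paid: bool

    def can_ship(self) -> bool:
        # Domain logic reads against the domain type,
        # not against string keys in a dict.
        return self.paid

def order_from_row(r: dict) -> Order:
    # The mapping code is boring on purpose: a small, one-time
    # price paid for keeping the use-case logic maintainable.
    return Order(id=r["id"],
                 total_cents=r["total_cents"],
                 paid=r["status"] == "paid")

order = order_from_row(row)
print(order.can_ship())  # True
```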
My understanding is, a database model is one that is fully normalized: tables designed to hold no redundant or repeated piece of information. You know, the one they teach you when you study relational DBs.
In that model, you can navigate from anywhere to anywhere by following references.
The domain model, at least from a DDD perspective, is different in at least a couple of ways: your domain classes expose business behaviours, and you can hide certain entities entirely.
For example, imagine an e-commerce application where you have to represent an order.
In the DB model, you will have the `order` table as well as the `order_line` table, where each row of the latter references a row of the former. In your domain model, instead, you might decide to have a single Order class with order lines only accessed via methods and in the form of strings, or tuples, or whatever - just not with an entity. The Order class hides the existence of the order_line table.
Plus, the Order class will have methods such as `markAsPaid()` etc, also hiding the implementation details of how you persist this type of information - an enum? a boolean? another table referencing rows of `order`? It does not matter to callers.
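In Python, that shape might look roughly like this (a sketch, not a full DDD aggregate; the line format is arbitrary):

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    # Order lines live inside the aggregate as plain tuples of
    # (description, quantity, unit_price_cents); no OrderLine
    # entity is exposed to callers, even though the database
    # has a separate order_line table.
    _lines: list[tuple[str, int, int]] = field(default_factory=list)
    _paid: bool = False

    def add_line(self, description: str, qty: int, unit_price_cents: int) -> None:
        self._lines.append((description, qty, unit_price_cents))

    def total_cents(self) -> int:
        return sum(qty * price for _, qty, price in self._lines)

    def mark_as_paid(self) -> None:
        # How "paid" is persisted (enum, boolean, extra table
        # referencing order rows) is invisible to callers.
        self._paid = True

order = Order()
order.add_line("widget", 2, 500)
order.add_line("gadget", 1, 1500)
print(order.total_cents())  # 2500
```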
For me, domain model means capturing as much information about the domain you are modeling as possible in the types and data structures you use. Most of the time that ends up meaning using unions to make illegal states unrepresentable. For example, I have not seen a database-native approach to saving union types. In that case, adding a separate domain layer becomes mandatory.
For context: https://fsharpforfunandprofit.com/posts/designing-with-types...
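A hedged Python sketch of what mapping a union onto a SQL-friendly row could look like (the payment states, column names, and discriminator scheme here are all made up for illustration):

```python
from dataclasses import dataclass
from typing import Union

# Domain layer: the union makes "refunded without a reason"
# unrepresentable in the type system.
@dataclass(frozen=True)
class Pending:
    pass

@dataclass(frozen=True)
class Paid:
    transaction_id: str

@dataclass(frozen=True)
class Refunded:
    transaction_id: str
    reason: str

PaymentState = Union[Pending, Paid, Refunded]

# Mapping layer: flatten the union into nullable columns plus a
# discriminator column, since SQL has no native union type.
def to_row(state: PaymentState) -> dict:
    if isinstance(state, Pending):
        return {"kind": "pending", "txn": None, "reason": None}
    if isinstance(state, Paid):
        return {"kind": "paid", "txn": state.transaction_id, "reason": None}
    return {"kind": "refunded", "txn": state.transaction_id, "reason": state.reason}

def from_row(row: dict) -> PaymentState:
    if row["kind"] == "pending":
        return Pending()
    if row["kind"] == "paid":
        return Paid(row["txn"])
    if row["kind"] == "refunded":
        return Refunded(row["txn"], row["reason"])
    raise ValueError(row["kind"])

state = Refunded("t-1", "damaged")
assert from_row(to_row(state)) == state  # lossless round trip
```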
To me, a domain model is an object-oriented API through which I can interact with the data in the system. Another way to interact would be direct SQL calls, of course, but then users would need to know how the data is represented in the database schema. With an OOP API, the API methods return instances of multiple model classes instead.
The way the different classes are associated with each other by method calls makes evident a kind of "theory" of our system: what kinds of objects there are in the system, what operations they can perform, returning other types of objects as results, and so on. So it looks much like a "theory" might in ecological biology: multiple species interacting with each other.
You can model this "theory" in the database itself.
There are certainly times I would love to see a presentation like this reformatted as an article.
I tried pulling out the YouTube transcript, but it was very uncomfortable to read, with asides and jokes and "ums" that are all native artifacts of speaking in front of a crowd but that are only noise when converted to long written form.
Shouldn't some AI be able to clean that up for you? This seems something LLMs should be well-suited for.
---
FWIW, I'm the speaker and let me be honest with you: I'm super unmotivated to write nowadays.
In the past, my usual MO was writing a bunch of blog posts and submit the ones that resonated to CfPs (e.g. <https://hynek.me/articles/python-subclassing-redux/> → <https://hynek.me/talks/subclassing/>).
However, nowadays, thanks to the recent-ish changes in Twitter and Google, my only chance to have my stuff read by a nontrivial number of people is hitting the HN front page, which is a lottery. It's so bad I even got into YouTubing to get a roll at the algorithm wheel.
It takes (me) a lot of work to crystallize and compress my thoughts like this. Giving it as a talk at a big conference, at least opens the door to interesting IRL interactions which are important (to me), because I'm an introvert.
I can't stress enough how we're currently eating the seed corn by killing the public web.
Here's an attempt at cleaning it up with Gemini 2.5 Pro: https://rentry.org/nyznvoy5
I just pasted the YouTube link into AI Studio and gave it this prompt if you want to replicate:
reformat this talk as an article. remove ums/ahs, but do not summarize, the context should be substantively the same. include content from the slides as well if possible.
I had the reverse problem a month ago: a greenfield project without existing data, domain model, or API. I had no reason to model the API or persistence layer any differently from the domain model, so I implemented the same class 3 times, with 2 mappings on top. For what? Well, at some point you will have API consumers and existing data, and you'll need to be able to change the then-existing system.
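The triplication being described looks roughly like this (names are illustrative); the payoff only arrives once the three shapes start to diverge:

```python
from dataclasses import dataclass, asdict

# The same shape three times: API schema, domain model, persistence row.
@dataclass
class UserApi:       # API layer
    name: str
    email: str

@dataclass
class User:          # domain layer
    name: str
    email: str

@dataclass
class UserRow:       # persistence layer
    name: str
    email: str

# ...plus two near-trivial mappings on top.
def api_to_domain(u: UserApi) -> User:
    return User(**asdict(u))

def domain_to_row(u: User) -> UserRow:
    return UserRow(**asdict(u))

row = domain_to_row(api_to_domain(UserApi("Ada", "ada@example.com")))
print(row)  # UserRow(name='Ada', email='ada@example.com')
```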
Interesting, perhaps modern conveniences encourage coupling.
No wonder there are so many single-monitor, no-LSP savants out there.
There was a comment on here saying this was an implied diss of SQLModel, but now that I came back to reply to it, it's gone. Weird. Since it's implied, I couldn't find it in the slides.
I wrote and then quickly deleted that comment; I never want to speak negatively publicly about open source projects — projects that people work incredibly hard to build and maintain. I felt my original comment crossed that line.
In any case, there is a slide in the talk that has both the Pydantic and SQLAlchemy logos. As far as I know, there's only one (somewhat popular) library that ties these two together. I think the speaker makes a persuasive case that data, domain, API, and other models should remain related but distinct.
Parts of the talk remind me of https://www.amundsens-maxim.com/
ha, I wish I saw that while working on that talk! adding it to the resources!
i've cultivated the perception of what op calls design pressure my whole career as the primary driver behind code and its shape. i think it's the most important aspect of a successful architecture, and it's purely intuition-based, which is also why there's no silver bullet. i've seen people take the most well-intentioned best practices and drive them into the ground because they lack the design-pressure sense.
i believe that design-pressure sense is a form of taste, and like taste it needs to be cultivated, and it can't be easily verbalized or measured. you just know that your architecture is going to have advantageous properties, but to sit down and explain why would take an inordinate amount of effort. the goal is to be able to look at the architecture and see its failure states as it evolves through other people working with it, external pressures, requirement changes, etc. over the course of 2, 3, ... 10, etc. years into the future. i stay in touch with former colleagues from projects where i was architect, just so that i can learn how the architecture evolved, what the pain points were, etc.
i've met other architects who have that sense, and it's a joy to work with them, because it is vibing. conversely, "best practices or bust" sticklers are insufferable. i make sure that i don't have to contend with such people.
Zen and the Art of Motorcycle Maintenance is a good reference.
Also, it is good to remember what game is actually being played. When someone comes up with and popularizes a given "best practice", why are they doing so? In many cases, Uncle Bob types are doing this just as a form of self-promotion. Most best practices are fundamentally indefensible, with proponents resorting to ad-hominem attacks if their little church is threatened.
Code is for communicating with humans primarily, even though it needs to be run on a machine. All the patterns, principles, and best practices are there to ease understanding and reasoning by other people, including your future self. Flexibility is essential, but common patterns and shared metaphors work wonders.
That's terribly short sighted. You can have a very clear architecture and code which cannot support the use cases required without almost starting from scratch.
You can also have the most flexible system ever designed, but if the rest of your team doesn't understand it, then good luck implementing those required use cases.
Sure, both extremes are shortsighted. I wasn't arguing for that, to be clear. I'm just saying clarity and ivory-tower architecting have little value if your system can't actually support the intended use case.
Which is what the person I was replying to said with "Code is for communicating with humans primarily, even though it needs to be run on a machine.". If the primary purpose is communication with other humans we wouldn't choose such awkward languages. The primary purpose of code is to run and provide some kind of features supporting use cases. It's really nice however if humans can understand it well.
That aphorism is completely incorrect. Code is primarily for communicating with a machine. If the purpose was to communicate with humans, we'd use human languages. Lawyers do that.
The code does also need to be understandable by other humans, but that is not its primary purpose.
So why do we have Java, Kotlin, Scala, Groovy, and Clojure, all targeting the JVM? And many such families?
The only thing that matters to the machine is opcodes and bits, but those are alien to humans, so we map them to assembly. Any abstraction higher than that is mostly for reasoning about the code and sharing that reasoning with other people. And in the process we've found some very good abstractions, which we then embed into programming languages: procedures, namespacing, OOP, pattern matching, structs, traits/protocols, ...
All these abstractions are good because they are useful when modeling a problem. Some are so good that it's worth writing a whole VM to get them (Lisp's homoiconicity, Smalltalk's consistent world representation, ...).
To allow you to write more readable and extensible code, that can solve real problems more effectively. Solving problems is the point of writing code.
Saying that reading code is the point of writing code is crazy, that's like saying the point of writing scripts is to read them, or the point of writing sheet music is to look at it.
No - the point of writing a script is to have it performed as a play, the point of writing music is to hear it and enjoy it. The point of writing code is to run it.
> All these abstractions are good because they are useful when modeling a problem.
Then what do you do after modeling the problem? You solve it! You run the program! Everything is in service to that.
> Solving problems is the point of writing code.
No one does it in isolation. The goal of having a common formal notation is for everyone to share solutions unambiguously with each other. We have mathematical notation, choreographic notation, music notation, electrical notation, ... because when you've created something, you want to share it as well as possible with others. If not, you could just ship the end result and be done with it.
So no, the point of writing music is not to hear it and enjoy it. To do that, you just pick up an instrument and perform; you don't need to do anything else. But to have someone else perform it, you can rely on their ear, their sight, and their memory to pick things up, or you can use the common notation to exchange the piece of music.
Because a secondary goal of code is communication with other humans. That means readability is still a highly valuable trait. Just not as valuable as the primary purpose.
I'd say code is a machine, even code in a high-level language. The code machine is somewhat special because its details look like words. This misleads us into believing we can reason with those words. We cannot. We can use them to build the machine itself, but the only way to explain how it works is to write a normal technical description, and the normal way to understand it should begin with reading that description. (There's no standard for a normal technical description, though.)
While you are obviously right about it not being the primary purpose, here it seems the discussion is about designing for long term maintainability vs just running code.
The person he replied said code is primarily for communicating with other people. I'm not sure how else to interpret that than what is literally written down.
This reminds me of the concept of “forces” [0][1][2] in design-pattern descriptions. To decide for or against the use of a given design pattern, or to choose between alternative design patterns, one has to assess and weigh the respective forces in the particular context where it is to be used. They are called forces because they collectively pull the design in a certain direction. Just a different physics analogy versus “pressure”.
[0] https://www.cs.unc.edu/~stotts/COMP723-s13/patterns/forces.h...
[1] https://www.pmi.org/disciplined-agile/structure-of-pattern-p...
[2] Chapter 19 in “Pattern languages of program design 2”, ISBN 0201895277
I'm not sure I'd take design advice from someone who thought attr.ib and attr.s were a good idea. On the other hand he points out that DDD is a vacuous cult, which is true.
> I'm not sure I'd take design advice from someone who thought attr.ib and attr.s were a good idea
Can you elaborate?
that's a reference to my attrs library, which is what data classes are based on. It originally used

    @attr.s
    class C:
        x = attr.ib()

as its main API (with `attr.attrs` and `attr.attrib` as serious-business aliases so you didn't have to use it). That API was always polarizing: some loved it, some hated it.
I will point out though, that it predates type hints and it was an effective way to declare classes with little "syntax noise" which made it easy to write but also easy to read, because you used the import name as part of the APIs.
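For comparison, here is a sketch of how the same kind of class is declared today with stdlib dataclasses (which, per this thread, grew out of attrs), where type hints carry the information that `attr.ib()` used to:

```python
from dataclasses import dataclass, field

@dataclass
class C:
    # Type hints replaced attr.ib() as the declaration syntax.
    x: int = 0
    tags: list = field(default_factory=list)

c = C(x=1)
print(c)  # C(x=1, tags=[])
```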
Here is more context: https://www.attrs.org/en/stable/names.html
I REGRET NOTHING
For what it’s worth, I was in the “loved it” camp.
(I’m the author of dataclasses, and I owe an immeasurable debt to Hynek).
if it's good enough for glyph, it's good enough for me
I’d call out patternitis and over-OOPification way before I’d criticize DDD. Yes, the latter can go too far, but the two former cases are abused on a much more frequent basis. Happily the pattern crazyness has died down a lot though.
DDD is nice, especially in the first phase. All the concepts are actually rehashed from earlier principles; there's nothing fully new there.