I refer to this as the Oracle problem. In the early ’90s, if you were using a database to manage things like payroll and inventory, you needed a *big* server. Paying for an expensive database was a good idea because you really needed to squeeze the last bit of efficiency out of the system.
By the early 2000s, your company’s database might have doubled in size (7% annual growth), but computers were 64x faster for the same price. Now you could handle the same workload in Access on a moderately good desktop (and a lot of companies did, though they shouldn’t have). Another decade later, you could buy three cheap Arm SBCs for under $100, set up Postgres with replication, and handle the same workload without noticeably spiking the CPU. Not only did the hardware cost drop to almost nothing, but the expensive database license went from a rounding error in the accounting to the vast majority of the total cost.
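The compounding here is worth making explicit. A quick sketch, under my own assumptions (7% annual workload growth, hardware price-performance doubling roughly every 20 months, which works out to the 64x per decade mentioned above), shows how fast the gap between workload and hardware opens up:

```python
# Back-of-the-envelope model of the trend described above.
# Both growth rates are assumptions for illustration, not the
# author's exact figures: 7%/year workload growth, hardware
# price-performance doubling every 20 months.

def workload_growth(years: float, annual_rate: float = 0.07) -> float:
    """Relative workload size after `years` of compound growth."""
    return (1 + annual_rate) ** years

def hardware_speedup(years: float, doubling_months: float = 20) -> float:
    """Relative price-performance after `years` of Moore's-law-style doubling."""
    return 2 ** (years * 12 / doubling_months)

if __name__ == "__main__":
    for years in (10, 20):
        w = workload_growth(years)
        h = hardware_speedup(years)
        print(f"After {years} years: workload x{w:.1f}, "
              f"hardware x{h:.0f}, headroom x{h / w:.0f}")
```

After one decade the workload has roughly doubled (1.07^10 ≈ 1.97) while hardware is 64x faster, leaving about 32x headroom; after two decades the headroom is over a thousandfold, which is why the same job migrates from a big server to a desktop to a handful of cheap SBCs.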