Cheap – Fast – Good | Implications For Your Martech Stack

Pascal Hakim
5 September 2022


Have you ever wished that some features of your martech stack were cheaper, better or faster? If so, you’re not alone. I have had many conversations about this with technical marketers across various businesses and industries.

What you may not know, however, is that there is often a technical reason why your software can’t deliver that feature cheaper, faster or better.

Software designers and developers always have to make choices when creating software features. One of the main trade-offs is where a feature sits in the cheap-fast-good triangle.

What is the cheap – fast – good triangle?

A well-known adage in software development is: “It can’t be cheap, fast and good. Pick two.”

Developers trade off between these elements every day. Generally speaking:

  • If the software is fast and good, it won’t be cheap 
  • If it’s fast and cheap, it won’t be good
  • If it’s good and cheap, it won’t be fast

This approach is represented by the cheap – fast – good triangle below. 

The triangle refers to the choices developers make between different ways of implementing a feature and the impact of these choices. 


Two elements must always be prioritised, as achieving all three at once is impossible.

The cheap – fast – good triangle & its implications for martech

Let’s take a look at how martech stacks use this triangle. 

Systems that are fast and good

Some systems in a martech stack are fast and good, but not cheap. Developers typically create systems like this when there is a requirement to react to a customer extremely quickly, such as personalising a web page. Anything that takes more than 20-100 milliseconds will be too slow, as the end user will notice something changing on the page.

For this type of challenge, developers will pre-calculate as much as possible and load the result of those pre-calculations into a fast database like DynamoDB. You can tell this occurs when you need to ‘train’ an algorithm or when there are hard limits on the number of rows you can load into a system.
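The pre-calculation pattern described above can be sketched as follows. This is a minimal illustration only: a plain Python dict stands in for a fast key-value store like DynamoDB, and the visitor data and scoring rule are hypothetical.

```python
# Sketch of the pre-calculation pattern: an offline batch job does the
# expensive work ahead of time, and request-time personalisation becomes
# a single fast key lookup. A dict stands in for a store like DynamoDB.

def precompute_offers(visitors):
    """Offline batch step: do the slow scoring work in advance."""
    cache = {}
    for visitor_id, page_views in visitors.items():
        # Expensive scoring happens here, not at request time.
        cache[visitor_id] = "premium_offer" if page_views > 10 else "standard_offer"
    return cache

def personalise(cache, visitor_id):
    """Request-time step: one key lookup, fast enough for a page load."""
    return cache.get(visitor_id, "default_offer")

offer_cache = precompute_offers({"v1": 15, "v2": 3})
print(personalise(offer_cache, "v1"))  # premium_offer
print(personalise(offer_cache, "v3"))  # default_offer
```

The hard row limits mentioned above follow naturally from this design: the fast store only ever holds what the batch job pre-loaded into it.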

Systems like this are expensive to purchase because of the fast storage they require. However, the expense is often justified by the personalisation they make possible for marketing efforts.

Systems that are cheap and fast

Some other systems in a typical martech stack are cheap and fast. Why would such a system be built? Because the additional quality of the expensive option isn’t worth the cost of building it. Distinguishing features of these systems include uncertainty about data size or readiness, or some inaccuracy in segment generation.

The most common example of this type of system is the free version of Google Analytics. It is common to see notices on some reports telling you that Google Analytics has sampled the data, particularly when you filter your data extensively.
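A generic illustration of why sampling is cheap and fast (this is not Google Analytics’ actual algorithm, and the session data is made up): estimating a metric from 10% of the records costs a fraction of a full scan, at the price of a small error.

```python
import random

def conversion_rate(sessions):
    """Share of sessions that converted (1 = converted, 0 = did not)."""
    return sum(sessions) / len(sessions)

random.seed(42)  # fixed seed so the example is reproducible

# Hypothetical session log: roughly 5% of sessions convert.
all_sessions = [1 if random.random() < 0.05 else 0 for _ in range(100_000)]

# Cheap and fast: scan only 10% of the data.
sample = random.sample(all_sessions, 10_000)

print(f"full scan: {conversion_rate(all_sessions):.4f}")
print(f"sampled:   {conversion_rate(sample):.4f}")
```

The two numbers land close together, which is exactly the bet a cheap-and-fast system makes: a slightly inaccurate answer now is worth more than a perfect answer later.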

You may also find that demand-side platforms (DSPs) have built systems that are cheap and fast. DSPs typically see very large amounts of data, as millions of bids happen each second.

To make matters even more complicated, DSPs often have to move segments into different data centres depending on the bidding location. They need to decide into which data centres they should load individual records when processing a segment. 

Many DSPs have made the decision that cheap and fast is a better way to build than fast and good, as it would be prohibitively expensive to build a solution that ingests and stores all data in real-time. 

You may see this when exporting segments to DSPs. The DSP may choose to match only parts of the segment, applying rules such as “the user must have been seen at least twice in the past seven days.” This requirement lets DSPs load only fresh data that is likely to be bid on, rather than loading the complete data history for the sake of completeness. It may also cause some campaigns to bid inconsistently or in unexpected ways.

It’s important to understand this to work with your DSP to ensure that the correct data is being used to achieve your desired results.

Systems that are cheap and good

Finally, we have systems that are both cheap and good but that are not fast. 

My first exposure to such systems in marketing was working with data management platforms (DMPs). They often have segment builders that can look at massive amounts of data, but they will take a while to process it and then make it available for activation.

To enable this, developers will often use something like Amazon Elastic MapReduce (EMR) so that the data can be processed completely and correctly at a low cost. A typical sign that this is happening is segments that need to be scheduled to refresh, or UIs showing information that is potentially hours old.
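The style of processing that EMR runs at scale can be shown with a toy map-reduce pass in plain Python. The event log and the 100-spend threshold are hypothetical; a real job would shard this across a cluster, which is why the results arrive hours, not milliseconds, later.

```python
from functools import reduce

# Toy map-reduce pass: build a "high-value customers" segment from raw
# event logs, processing every record completely and correctly.
events = [
    {"user": "u1", "spend": 60}, {"user": "u2", "spend": 20},
    {"user": "u1", "spend": 70}, {"user": "u3", "spend": 150},
]

def reducer(totals, event):
    """Reduce step: accumulate total spend per user."""
    totals[event["user"]] = totals.get(event["user"], 0) + event["spend"]
    return totals

totals = reduce(reducer, events, {})

# Segment rule: total spend over 100 across all events ever seen.
segment = sorted(u for u, spend in totals.items() if spend > 100)
print(segment)  # ['u1', 'u3']
```

Note that the segment catches u1, whose individual events were each small: that completeness is exactly what the cheap-and-good corner buys you, at the cost of latency.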

Interestingly, I’m finding that more and more martech vendors are implementing their segment builders using such techniques. This allows them to have richer, more complex segments, but the system will take longer to process them.

The benefit of building to this side of the cheap-fast-good triangle is that you will get a robust system that will process all data. It may take a long time to do so, however.


It’s advantageous for all of us to understand how our vendors have chosen to implement the different parts of their marketing stack and the matching opportunities and challenges. 

Once you know which features are fast, good or cheap, you can make better decisions about where your data should live and keep your technology costs down, all of which should help you achieve your business goals faster.
