A/B testing, or split testing, holds back part of the audience in order to measure how changes to individual elements affect behaviour. In this type of testing, two versions of a piece of content that differ in only one variable are each shown to an audience segment of similar size (a minimal assignment sketch follows the list below). It has three main purposes:

  • To determine which communication works better.
  • To learn what effect different communication elements have on different target audiences in different situations.
  • To provide information that supports the decision-making process.
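
As a rough illustration of the mechanics described above, the sketch below randomly splits an audience into two similar-sized groups, each of which would see one version of the content. The user IDs, group labels, and headline comments are hypothetical placeholders rather than part of any particular tool.

```python
import random

def assign_variant(user_ids, seed=42):
    """Randomly split users into two similar-sized groups, A and B.

    Only one variable (here, say, the headline) differs between variants.
    """
    rng = random.Random(seed)
    shuffled = list(user_ids)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "A": set(shuffled[:midpoint]),   # sees the original headline
        "B": set(shuffled[midpoint:]),   # sees the alternative headline
    }

# Hypothetical example: 10,000 users split into two groups of ~5,000 each.
users = [f"user_{i}" for i in range(10_000)]
groups = assign_variant(users)
print(len(groups["A"]), len(groups["B"]))  # 5000 5000
```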

Although this type of testing helps us understand which communication works better, it cannot tell us which one works best, because many elements of the experiment are outside our control. Some of the main problems are the following:

  • Unique events that can throw off the data, for example an influencer tweeting your “A” communication but not your “B” communication.
  • Ending tests too quickly, before enough data has been gathered to draw a statistically significant conclusion (see the sketch after this list).
  • Target audiences and segmentation: as long as an ad is performing well on Google for one specific market, Google won’t serve it to a different one.
  • Believing the results: tests should be repeated to produce more reliable results, but we don’t always have the time or resources to do so. As a result, we end up with evidence that is not necessarily good or true.
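
The “ending tests too quickly” and “believing the results” problems both come down to statistical significance. As a hedged illustration, the sketch below compares the conversion rates of the two groups with a two-proportion z-test; the visitor and conversion counts are made-up numbers, and the 0.05 threshold is simply a common convention.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    between two conversion rates (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: A converts 120 of 5,000 visitors, B converts 150 of 5,000.
z, p = two_proportion_z_test(120, 5000, 150, 5000)
print(f"z = {z:.2f}, p = {p:.3f}")
if p < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("Not significant yet -- keep the test running or repeat it.")
```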

There are many other problems and elements we can’t control during A/B testing. However, it is important to remember that no testing means relying on biased opinions, while testing gives us the opportunity to consolidate information rather than gather it at random.