The whole furor around cookie deletion rates (1st and 3rd party) has been brewing in the Web Analytics scene for a while, and likewise I've been meaning to comment on it here. How the cookie crumbles is a topic I've been fielding repeated questions about for a solid year or more, and in that time I've maintained to my clients that, relatively speaking, the matter has been a bit of a media-fueled tempest in a teacup.
It was only a matter of time until someone published a thorough discussion of why. If you didn't catch the final version of Stone Temple's very intriguing 2007 Web Analytics Shootout, hit it up and watch for these gems:
...some of the largest sources of error, such as those that relate to session management and counting, do cause a variance in the traffic results returned by the packages, but they do not affect the ability of the program to monitor the key performance indicators (KPIs) of your site.
Even if an analytics package is measuring the behavior of only 80% of their users, it remains highly relevant and valuable data. By contrast, the traditional print industry relies on subscriber surveys, and feels lucky if they get 20% response. They would die for data on 80% of their customers.
The fact is that some percentage of the questionable data is bad, and some of it may actually relate to a real user. The package that throws out too little gets skewed in one direction, and the package that throws out too much gets skewed a little bit in the other direction.
Neither of these changes the ability of these packages to measure trends, or to help you do sophisticated analysis about what users are doing on your site.
Thanks, Eric. I couldn't have summed it up better myself, especially in consideration of how cookie deletion and/or blocking is just one "error" type that can occur.
Now, critics of this stance cite the revenue models of advertising networks, specifically those that make their money by serving impressions (typically pricing by CPM). My opinion there, though, is that they have essentially the same calibration tasks to embrace - as part of routine maintenance - as their customers do. The issue is that there is no perfect number. It's human nature to seek out impurities, and to brand all forms of hypocrisy and other short-sightedness along the way. In the world of online analytics though, those urges have their place but do have to be kept in check. In this game, yes, actionable guidance is real and essential. As far as hard numbers go, however, there is no "final answer."
The real issue is more that it falls to publishers and advertisers to be mutually prepared to haggle it out if/as their numbers don't match up when it's invoicing and/or contract-renewal (a.k.a. "remind us again what we pay you people for") time. Sink your teeth in and take a bite; it's part of the fun. Always be tracking and auditing your own end via multiple tools, i.e. disparate reporting sources, if/as it really, really matters; a rough sketch of what that kind of reconciliation can look like follows the list below.
- If you're a publisher, advertisers should want to keep your business. If/as you've a good case to make, do so... or at least make sure someone (your agency if you have one) is watching over your house attentively. It's part of what any involved intermediaries should be there for (and why it's good to have an agnostic middleman in the mix sometimes).
- On the other hand, if you're an advertiser and paying by Impressions is what puts food on your table, build maximum deviations into your pricing model as a safety net. It's not like publishers don't need you to maintain visibility.
If the relationship's been working out and neither party's made itself a rep with the other for being a choad, problem solved.
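To make that "maximum deviations" idea concrete, here is a minimal Python sketch of how mismatched impression counts might be reconciled against a contractual tolerance before an invoice goes out. Everything in it - the numbers, the 10% tolerance, the reconcile_invoice function - is hypothetical and illustrative, not the report's methodology or any vendor's billing logic.

```python
# Illustrative sketch only: hypothetical figures and function names, not any
# vendor's API. It shows one way to build an agreed "maximum deviation" into
# CPM invoicing when the publisher's and advertiser's counts don't match.

def reconcile_invoice(publisher_impressions: int,
                      advertiser_impressions: int,
                      cpm_rate: float,
                      max_deviation: float = 0.10) -> dict:
    """Settle on a billable impression count given two measurements.

    If the two counts agree within max_deviation (relative to the publisher's
    count), bill on the publisher's number. If they don't, cap billing at the
    advertiser's count plus the tolerance and flag the gap for a human
    conversation at renewal time.
    """
    deviation = abs(publisher_impressions - advertiser_impressions) / publisher_impressions
    if deviation <= max_deviation:
        billable = publisher_impressions
        flagged = False
    else:
        billable = min(publisher_impressions,
                       int(advertiser_impressions * (1 + max_deviation)))
        flagged = True
    return {
        "deviation": round(deviation, 4),
        "billable_impressions": billable,
        "invoice_amount": round(billable / 1000 * cpm_rate, 2),
        "needs_review": flagged,
    }

# Example: publisher logged 1,000,000 impressions, the advertiser's tags saw
# 820,000, at a $4.50 CPM with a 10% tolerance written into the contract.
print(reconcile_invoice(1_000_000, 820_000, 4.50))
# -> deviation 0.18, billing capped at 902,000 impressions, flagged for review
```

The exact rule matters far less than having one that both parties agreed to before the counts diverged.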
I'm using the report's publication as a chance to sound off about cookies a little, since one of the most common traps I see people fall into in this space is habitually obsessing over small incremental gains in accuracy while under-monitoring longer-term trends. Some marketers lack the patience to pay enough attention to larger trends, and/or worse, fail to invest in positioning themselves to monitor them - caveats and all - from the moment they hit the Web. One of the biggest sacrifices made when problems like these take hold is that marketers never get around to ad-side and/or site-side conversion optimization: tasks like A/B or - even better - multivariate testing, for example. I'm not saying the issue isn't important. I'm saying it is, and things can be done about it, but beyond noting that it's not the end of the world, marketers need to check their heads re. expectations. Different vendors / tools calculate sessions - and hence metrics like Visits, Unique Visitors and others - differently. Some natural variance between them is going to happen inherently, i.e. even if/when implementations are flawless.
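To illustrate that last point, here is a toy Python sketch - hypothetical hit data and deliberately simplified rules, not any vendor's actual algorithm - showing how the same four pageviews from one visitor can legitimately count as two Visits under one sessionization rule and three under another.

```python
# A minimal sketch (made-up data, simplified rules) of why two tools can report
# different Visit counts from identical, flawlessly collected hits: they simply
# cut sessions differently.

from datetime import datetime, timedelta

# One visitor's pageview timestamps over a single evening.
hits = [datetime(2007, 9, 5, 23, 10),
        datetime(2007, 9, 5, 23, 50),
        datetime(2007, 9, 6, 0, 15),
        datetime(2007, 9, 6, 0, 35)]

def sessions_by_inactivity(hits, timeout_minutes=30):
    """Start a new session whenever the gap between hits exceeds the timeout."""
    sessions = 1
    for prev, curr in zip(hits, hits[1:]):
        if curr - prev > timedelta(minutes=timeout_minutes):
            sessions += 1
    return sessions

def sessions_by_inactivity_or_midnight(hits, timeout_minutes=30):
    """Same inactivity rule, but also cut a session at midnight, as some tools do."""
    sessions = 1
    for prev, curr in zip(hits, hits[1:]):
        if curr - prev > timedelta(minutes=timeout_minutes) or curr.date() != prev.date():
            sessions += 1
    return sessions

print(sessions_by_inactivity(hits))              # 2 visits
print(sessions_by_inactivity_or_midnight(hits))  # 3 visits
```

Multiply small definitional differences like that across every visitor and every day, and a few percent of "unexplained" variance between tools is exactly what you should expect, even from flawless implementations.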
So, I've given enough of a spoiler on the report and done my soap-boxing. For more of its details, data and otherwise, go read it. Where top-tier vendors are concerned it's unfortunately sparse on Omniture data, but to be fair, aside from Omniture being a market leader, I'm picking on that a little extra just as a relatively deep and frequent Omniture user myself. Anyway... that's good enough for me.