Periodic Adjustments of Data from Statistics API Endpoints


#1

We are implementing a reporting architecture to pull stats down for our campaigns.

Many advertising systems have the notion of periodic adjustments to reporting data to account for discrepancies. It is common for upstream & downstream systems to compare numbers and occasionally reconcile them via manual batch updates.

What process does the Twitter Ads Reporting team have in place to periodically adjust numbers? Do adjustments occur for all the stats endpoints? Would it make sense for us to implement a periodic scanback to account for adjustments to data?

As API developers, if we understand when adjustments are made on your end and how frequently, we can optimize the way we account for them on our end.


#2

Thanks for asking, Chris!

Broadly, we suggest the following to guide pullback of historical data:

https://dev.twitter.com/ads/analytics/best-practices

You'll need to weigh the level of granularity you're looking for against the cost of the calls. Daily granularity is likely sufficient, which helps keep both the volume of data and the pull-back manageable.

We also do not see much change after 30 days, so we suggest keeping the scanback to that scope. Seven days is generally sufficient, since most metrics are attributed within that timeframe.

The above is based on guidance from our eng team, and as v1 rolls out we will adjust these recommendations accordingly.
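For illustration, a daily scanback along these lines might be scheduled like this. This is a minimal sketch, not an actual Ads API integration: `fetch_stats` is a hypothetical placeholder for whatever stats-endpoint call your client makes, and the 7-day window is just the typical attribution scope mentioned above.

```python
from datetime import date, timedelta

# Hypothetical placeholder for a stats-endpoint request; in a real
# integration this would call the Ads API client for one day's data.
def fetch_stats(day):
    return {"date": day.isoformat()}

def scanback_days(today, window=7):
    """Dates to re-request: the last `window` full days before `today`.

    Re-pulling each day at daily granularity picks up any upstream
    adjustments while keeping the call volume low.
    """
    return [today - timedelta(days=i) for i in range(window, 0, -1)]

# Re-pull and overwrite the stored row for each day in the window.
for day in scanback_days(date.today()):
    fetch_stats(day)
```

A weekly job with a 30-day window (the scope past which adjustments are rare, per the note above) would be the conservative variant of the same loop.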

Does that help?


#3

Yes, it does! Thanks for the info.