Question about POST request and how to get a report for only promoted_tweets in a specified date range


#1

I have two questions:

  1. First question:
    In document Asynchronous Analytics: POST /stats/jobs/accounts/:account_id https://developer.twitter.com/en/docs/ads/analytics/api-reference/asynchronous.html#post-stats-jobs-accounts-account-id.
    I see 2 params “entity” and “entity_ids”. So, I want to confirm that:
  • If entity is CAMPAIGN => entity_ids is the list of campaign_id
  • If entity is LINE_ITEM => entity_ids is the list of line_item_id
  • If entity is PROMOTED_TWEET => entity_ids is the list of promoted_tweet_id
    Is it correct?
  2. Second question:
    I currently get reports per tweet. The main flow is the following steps:

Step 1. GET accounts/:account_id/promoted_tweets
Step 2. Loop over the list of promoted_tweets retrieved at Step 1.

  • Create job id using request Asynchronous Analytics: POST /stats/jobs/accounts/:account_id with params:
    . placement=“ALL_ON_TWITTER”
    . granularity=“DAY”
    . entity=“PROMOTED_TWEET”
    . metric_groups=“ENGAGEMENT,VIDEO,BILLING,MEDIA”
    . start_time
    . end_time
    . entity_ids (a list of promoted_tweet_id values)
  • Then, I get the job_url from the returned job_id using Asynchronous Analytics: GET /stats/jobs/accounts/:account_id
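The params built in Step 2 can be sketched as follows. This is a minimal illustration, not the official SDK; the helper name and the ID values are made up. Note that the flow assumes “entity_ids” contains IDs of the same type named by “entity”, matching the first question above:

```python
# Hypothetical helper: build the query params for
# POST /stats/jobs/accounts/:account_id from the values used in Step 2.

def build_stats_params(entity, entity_ids, start_time, end_time,
                       granularity="DAY", placement="ALL_ON_TWITTER"):
    """Assemble async stats job params; entity_ids must match the entity type."""
    assert entity in {"CAMPAIGN", "LINE_ITEM", "PROMOTED_TWEET"}
    return {
        "entity": entity,
        "entity_ids": ",".join(entity_ids),  # comma-separated list of IDs
        "granularity": granularity,
        "placement": placement,
        "metric_groups": "ENGAGEMENT,VIDEO,BILLING,MEDIA",
        "start_time": start_time,
        "end_time": end_time,
    }

# Example with made-up promoted_tweet IDs:
params = build_stats_params("PROMOTED_TWEET", ["1a2b3", "4c5d6"],
                            "2019-01-01T00:00:00Z", "2019-01-08T00:00:00Z")
```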

My question is:

  • As you can see in the above flow, I loop over all promoted_tweets. But in Step 2, when I create the job_id, I only want report data for tweets in the specified range (start_time -> end_time).
  • The problem is that looping over all promoted_tweets takes a lot of time and many requests (hitting rate limits) for unnecessary promoted_tweets that have no report data, or are no longer used, in the specified range (start_time -> end_time).
  • The problem would be solved if the “entity_ids” param were not required when creating a job_id with the Asynchronous Analytics: POST request. But at present, it is required https://developer.twitter.com/en/docs/ads/analytics/api-reference/asynchronous.html#post-stats-jobs-accounts-account-id.
  • So, can you tell me the best way to get a report for only the promoted_tweets active in the specified range (start_time -> end_time), as if we could create a job_id without the “entity_ids” param?
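One partial mitigation for the rate-limit concern above: the asynchronous stats endpoint accepts multiple entity_ids per job (the documented cap has been 20 IDs per request; verify against the current docs), so the Step 2 loop can submit batches of IDs rather than one job per promoted_tweet. A minimal batching sketch (the function name is hypothetical):

```python
# Split a flat list of promoted_tweet IDs into batches, one async stats
# job per batch, instead of one job per ID. batch_size=20 reflects the
# historical per-request cap on entity_ids; check the current docs.

def chunk_entity_ids(entity_ids, batch_size=20):
    """Return consecutive slices of at most batch_size IDs."""
    return [entity_ids[i:i + batch_size]
            for i in range(0, len(entity_ids), batch_size)]

ids = [f"pt{i}" for i in range(45)]
batches = chunk_entity_ids(ids)
# 45 IDs -> 3 jobs instead of 45 single-ID jobs
```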

Thank you, everybody.


#2

I may have found the solution to my second question in the reply by @brandonmblack at How to get Promoted Tweet Ids OR Line Item Ids which have some analytics data in stats apis for a particular date range. Thank you so much. It helps me a lot.

Besides that, if anyone has a better way to filter the list of promoted_tweets, please share it with me. Thank you, everyone.

Can anyone answer my first question?


#3

Unfortunately, we don’t have built-in support to automatically figure out a filtered list of promoted tweets that served during a certain timeframe.

Generally, there are two key points to try to achieve:

  1. Don’t re-request data which hasn’t changed
  2. Don’t request data which has no chance of serving

For #1, once three or four days have passed, all metrics such as impressions and spend should be effectively frozen and do not need to be refetched. For some types, like conversions, you may need to keep fetching, as those can continue to be updated for as long as the ‘conversion window’ is set.
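That freshness rule can be sketched as a simple check. The four-day freeze and 30-day conversion window below are placeholder assumptions; substitute your account’s actual conversion window:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical helper for point #1: treat most metric groups as frozen a
# few days after the stats window ends, but keep refetching conversion
# metrics for the length of the conversion window.

FREEZE_AFTER_DAYS = 4          # assumption: "three or four days" per the post
CONVERSION_WINDOW_DAYS = 30    # assumption: depends on your account settings

def needs_refetch(window_end, metric_group, now=None):
    """Return True if stats for this window/metric group may still change."""
    now = now or datetime.now(timezone.utc)
    age = now - window_end
    if metric_group == "WEB_CONVERSION":
        return age <= timedelta(days=CONVERSION_WINDOW_DAYS)
    return age <= timedelta(days=FREEZE_AFTER_DAYS)
```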

For #2, it’s common to write a simple filter such as “could the line item associated with the promoted tweet have been active during the time range (based on its start and end time)?” and “is it paused or deleted such that it couldn’t have served?”. You can also sometimes use the ‘sort_by’ option with the ‘updated_at’ field to see which entities have changed recently and make sure to check those first.

Past a certain scale of data, it becomes necessary to have some sort of DB system on your end and to constantly sync data with different priority queues while working around rate limits. But if the number of campaigns and accounts is smaller, you will have more leeway, and most rate limit issues should be solved as long as you are doing the filtering described above. If you cannot ‘keep up’ with stats, you can post more details about the scale of data being retrieved, and we can give more advice about the algorithm being used.
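The flight-date filter described above can be sketched like this. The field names (start_time, end_time, deleted) mirror Ads API line item attributes, but the helper itself is a hypothetical illustration; ISO 8601 timestamp strings in a uniform format compare correctly as plain strings:

```python
# Hypothetical filter for point #2: keep a promoted tweet only if its line
# item's flight dates overlap the report window and it was not deleted.

def could_have_served(line_item, window_start, window_end):
    """Return True if the line item could have delivered in the window."""
    if line_item.get("deleted"):
        return False
    # A missing start/end time means the flight is unbounded on that side.
    start = line_item.get("start_time") or window_start
    end = line_item.get("end_time") or window_end
    # Standard interval-overlap test.
    return start <= window_end and end >= window_start
```

Promoted tweets whose line items fail this check can be dropped before any stats jobs are created, shrinking the entity_ids loop from the second question.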


#4

@JBabichJapan Thank you for your response.