Concurrent async stats jobs overwrite each other

Tags: v1, ads, analytics, api

#1

Hi there,

I have come across some peculiar behavior that seems like a bug. If I submit two requests to the async stats endpoint in quick succession (so that they run concurrently), the data of the first job is overwritten by the data of the second job.

For example, if you run the following curl commands (OAuth credentials removed, obviously), the data payloads will be identical across both jobs when they complete.

Thank you for the assistance!

curl --request 'POST' 'https://ads-api.twitter.com/1/stats/jobs/accounts/18ce53w5her' --data 'end_time=2015-12-04T23%3A00%3A00Z&entity=CAMPAIGN&granularity=TOTAL&metric_groups=BILLING&placement=ALL_ON_TWITTER&segmentation_type=SIMILAR_TO_FOLLOWERS_OF_USER&start_time=2015-11-30T23%3A00%3A00Z&entity_ids=3og6d' --header 'Authorization: OAuth oauth_consumer_key="<SECURE_DATA_HERE>", oauth_nonce="<SECURE_DATA_HERE>", oauth_signature="<SECURE_DATA_HERE>", oauth_signature_method="HMAC-SHA1", oauth_timestamp="<SECURE_DATA_HERE>", oauth_token="<SECURE_DATA_HERE>", oauth_version="1.0"' --verbose

curl --request 'POST' 'https://ads-api.twitter.com/1/stats/jobs/accounts/18ce53w5her' --data 'end_time=2015-12-04T23%3A00%3A00Z&entity=CAMPAIGN&granularity=TOTAL&metric_groups=BILLING&placement=ALL_ON_TWITTER&segmentation_type=LOCATIONS&start_time=2015-11-30T23%3A00%3A00Z&entity_ids=3og6d' --header 'Authorization: OAuth oauth_consumer_key="<SECURE_DATA_HERE>", oauth_nonce="<SECURE_DATA_HERE>", oauth_signature="<SECURE_DATA_HERE>", oauth_signature_method="HMAC-SHA1", oauth_timestamp="<SECURE_DATA_HERE>", oauth_token="<SECURE_DATA_HERE>", oauth_version="1.0"' --verbose
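For what it's worth, the same repro in Python looks roughly like this. The requests_oauthlib library and the placeholder credentials are just an illustrative choice on my part, and the response field names are assumed:

```python
# Repro sketch: submit two stats jobs back-to-back so they run
# concurrently server-side. Credentials below are placeholders.
from requests_oauthlib import OAuth1Session

session = OAuth1Session(
    "CONSUMER_KEY",                       # placeholder
    client_secret="CONSUMER_SECRET",      # placeholder
    resource_owner_key="ACCESS_TOKEN",    # placeholder
    resource_owner_secret="ACCESS_TOKEN_SECRET",  # placeholder
)

URL = "https://ads-api.twitter.com/1/stats/jobs/accounts/18ce53w5her"
common = {
    "start_time": "2015-11-30T23:00:00Z",
    "end_time": "2015-12-04T23:00:00Z",
    "entity": "CAMPAIGN",
    "entity_ids": "3og6d",
    "granularity": "TOTAL",
    "metric_groups": "BILLING",
    "placement": "ALL_ON_TWITTER",
}

# Only segmentation_type differs between the two jobs.
for seg in ("SIMILAR_TO_FOLLOWERS_OF_USER", "LOCATIONS"):
    resp = session.post(URL, data={**common, "segmentation_type": seg})
    # Job id to poll later (response shape assumed from memory).
    print(seg, resp.json()["data"]["id"])
```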


#2

Hi,

We don't believe this is possible, given how the system is implemented. Is it possible that when you view what you believe is the second job's file, you are actually opening the first job's file, or some scenario like that?

Could you please double-check the data? If you are still sure there is a problem, we would need as much detail as possible: why the data is definitely wrong and, ideally, the full set of steps you took to schedule and retrieve the job data.
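For reference, a typical way to verify which file belongs to which job is to poll each job id and download from the URL that job itself reports, along the lines of the sketch below. The response field names ("data", "status", "url") are assumptions from memory, so double-check them against the docs:

```python
# Sketch of verifying which result file belongs to which job: poll a job
# id until it completes, then download the file that job points to.
import time

def fetch_job_result(session, account_id, job_id):
    jobs_url = "https://ads-api.twitter.com/1/stats/jobs/accounts/" + account_id
    while True:
        job = session.get(jobs_url, params={"job_ids": job_id}).json()["data"][0]
        if job.get("status") == "SUCCESS":
            # Each job id should resolve to its own distinct result URL.
            return session.get(job["url"]).content
        time.sleep(5)  # still processing; poll again shortly
```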

Thanks,

John


#3

Hi John,

Yes, after some more testing, the problem turned out to be a mix-up of parameters on our side as jobs came off our queue, which caused the overwriting. The data is now coming through OK!
Thank you for your response.
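In case it helps anyone else, the bug class on our side looked roughly like the hypothetical sketch below: a shared, mutable parameter dict enqueued by reference, so mutating it for the second job also changed what the first job's worker read.

```python
# Hypothetical illustration of the bug class: the same mutable dict is
# enqueued for each job, so mutating it for job 2 also changes the
# params that job 1's worker will read.
import queue

jobs = queue.Queue()
params = {"segmentation_type": "SIMILAR_TO_FOLLOWERS_OF_USER"}
jobs.put(params)  # job 1 enqueues a reference, not a copy

params["segmentation_type"] = "LOCATIONS"
jobs.put(params)  # job 2 mutates the same object

print(jobs.get()["segmentation_type"])  # LOCATIONS: job 1's params were overwritten

# Fix: enqueue an independent copy per job, e.g. jobs.put(dict(params)).
```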