Error uploading tailored audience when assigning placeholder to data - are there problems with V0 or have things changed? Anyone else experiencing this issue?


Problems with tailored audience upload process today:

Account ID: 18ce53w624n

  1. Using v0 of the Ads API
  2. The process worked before
  3. No errors during placeholder creation and data upload
  4. The error occurs in the last step, when the placeholder is assigned to the data (auth headers removed):

Cannot get data from host:
Request:
175 * Client out-bound request
175 > POST
Response:
175 * Client in-bound response
175 < 400
175 < date: Mon, 04 Apr 2016 19:16:30 GMT
175 < server: tsa_a
175 < content-length: 206
175 < x-response-time: 35
175 < x-frame-options: SAMEORIGIN
175 < x-rate-limit-reset: 1459797445
175 < x-rate-limit-remaining: 297
175 < x-rate-limit-limit: 300
175 < x-transaction: 3764f63170ecc8e2
175 < strict-transport-security: max-age=631138519
175 < set-cookie: guest_id=v1%3A145979739008913982; Path=/; Expires=Wed, 04-Apr-2018 19:16:30 UTC
175 < x-xss-protection: 1; mode=block
175 < x-content-type-options: nosniff
175 < content-disposition: attachment; filename=json.json
175 < x-connection-hash: 0afdf8013ff3739126af586c0231b70d
175 < x-access-level: read-write
175 < x-runtime: 0.027534
175 < content-type: application/json;charset=utf-8
175 <
{"errors":[{"code":"INVALID_PARAMETER","message":"Expected valid file path, got \"/ta_partner/928374306/_nSbPBEtLhGiJFw.csv\" for input_file_path","parameter":"input_file_path"}],"request":{"params":{}}}
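In our client we now special-case this error class when the association call comes back with a 400. Here is a minimal sketch of the error-body parsing, assuming the JSON shape shown in the response above (the `ads_api_error_codes` helper name and the simplified body are ours, not part of the API):

```python
import json

def ads_api_error_codes(body: str) -> list:
    """Pull the error codes out of a Twitter Ads API JSON error body, so a
    caller can distinguish INVALID_PARAMETER (a bad input_file_path, not worth
    retrying as-is) from transient failures that may be worth retrying."""
    try:
        data = json.loads(body)
    except ValueError:
        return []  # not a JSON error body
    return [e.get("code") for e in data.get("errors", [])]

# Example, using a simplified version of the 400 body above:
body = ('{"errors":[{"code":"INVALID_PARAMETER",'
        '"message":"Expected valid file path for input_file_path",'
        '"parameter":"input_file_path"}],"request":{"params":{}}}')
print(ads_api_error_codes(body))  # ['INVALID_PARAMETER']
```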

All lists getting stuck in processing

There haven’t been any issues with the Tailored Audience Changes endpoint, and I was just able to successfully make a similar ADD request.

I suspect that an incorrect input_file_path value may have been passed in, or if you were using a resumable TON upload, perhaps that upload was never successfully completed?



Here are more details re the problem above:

List ID: 15302

The list in question, uploaded to the same ads account, gets stuck in processing on the Twitter side. I will check again later today, but when these lists get into this state (after placeholder creation and data upload to Twitter) they tend to never complete processing, and there is no way to re-upload because the placeholder has already been created. I will post again this afternoon to confirm that processing has not completed. This is not the first list: we encountered the same issue last Thursday using the Ads API. It worked prior to that, and again on Friday, but broke again yesterday.

Can you let us know what could be happening, or what has changed in the upload process? The way we upload via the Ads API hasn’t changed since Q3 2015, so this is a net new problem for us.



After further investigation, we’ve found that adding an artificial delay seems to result in success. We delayed the attempt to associate the placeholder with the uploaded data, and we are now experimenting with different delay sizes to see what yields the most success (the fewest lists stuck in processing).
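The workaround above amounts to a retry loop with an increasing delay between association attempts. A sketch of what we are experimenting with, where `associate_placeholder` is a hypothetical stand-in for whatever wrapper you use around the final association call, and the delay values are arbitrary starting points rather than anything Twitter documents:

```python
import time

def backoff_delays(base_seconds=30, factor=2, attempts=5):
    """Delay schedule between association attempts, e.g. 30s, 60s, 120s, ..."""
    return [base_seconds * factor ** i for i in range(attempts)]

def associate_with_retry(associate_placeholder, delays):
    """Call the caller-supplied association function, sleeping between
    failed attempts. Returns True on the first success, False otherwise."""
    for delay in delays:
        if associate_placeholder():
            return True
        time.sleep(delay)
    return associate_placeholder()  # one final attempt after the last delay
```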

Can you let me know if there is a caching process or some type of script that needs to complete running before the final association process between the placeholder and the data can work? How can we determine when to start the final step of the upload process - is there some sort of response through the api that we can monitor for?


We are also seeing audiences stuck in processing and then marked as “Audience Too Small” once again. It’s happened to us several times during the last few weeks. Last week, after around 48 hours, audiences were re-calculated and displayed their right size (and were marked as Ready).

Have you had any luck? Did your audience lists get marked as Ready after a couple of days?


We diagnosed the issue as a problem with the last step of the TA upload process - associating the placeholder with the data. It seems like this cannot always happen immediately after the data upload has completed. Tailored audiences that fail this association step show up as processing in Twitter Audience Manager but in fact will never complete processing since the association between the placeholder and the data will never happen so the actual match processing part never starts.

I don’t know why this is happening on V0, but the status in Audience Manager is clearly incorrect. I haven’t seen the audiences change status from Processing to Too Small yet (they were uploaded over the past few days), but I suspect that’s just another false error message, like the original Processing status.

It’s unknown whether the Twitter API has implemented something new (caching, a script), or whether the deprecated API is simply overloaded as they move resources over to V1, but this is definitely a change in behaviour from last year.


@manueldelgado, @lkamitakahara - any luck? I’m seeing audiences stuck in processing again now (48 hours and no change to status). What version of the API are you using for TA?


We’re using V0 of the API. We have list upload tests from last night ~18 hrs ago that are still processing. Will check again later tonight to see if they finish within 24 hrs and will post here if they do or not. What version of the API are you using?


Also on V0, but also trying to figure out if that is the/an issue with TA before switching. Very frustrating.


In our case, our latest TAs have taken slightly over 48 hours to move to the Ready state (51-52 hours, to be precise). They first get into the “Audience too small” state 6-8 hours after upload, and they stay like that for nearly two days.

The API version does not seem to be relevant, as we have experienced the same timeframe even with audiences uploaded through the web interface.


I am experiencing the same lag in processing for lists uploaded through the api. I have 3 lists uploaded to my advertiser account that have been processing for 35+ hrs now.

Opened another topic as I need to get a response from Twitter re the lag:

Tailored Audience Stuck In Processing

Likewise, we are still using version 0, and we are seeing processing times over 100 hours. Do we think that bumping to the new version (at least for some of these syndication-related endpoints) will potentially alleviate some of these issues?