Unable to update audience from TON bucket hashed email file

tailored-audiences
ton-upload
ton

#1

Hi,

I’m trying to use the API to create new audiences, and then update the audience from a list of emails.

I can successfully create a new audience via the API (POST to /accounts/{TwitterAccountId}/tailored_audiences), passing the “name” parameter and “list_type” = EMAIL. I get a CREATED status code back (and I can also see the audience via the website).

I take the “audienceId” from the data.id property in the response from creating the audience.

Next I POST a hashed list of email addresses to “/ton/bucket/ta_partner”. I get a CREATED status code in response, and I take the file location from the “Location” response header. I can then successfully GET this file from https://ton.twitter.com/1.1/ton/bucket/ta_partner/{accountId}/{fileName}, and it looks like the data I uploaded.

Finally I POST to “/accounts/{TwitterAccountId}/tailored_audience_changes” and get a CREATED status code in response.
I pass the audienceId from the first request as the “tailored_audience_id” parameter, specify “ADD” as the “operation” parameter, and pass the fileLocation from the second request as the “input_file_path” parameter. The response has a data.state property of “COMPLETED”, which I am not expecting since I am uploading several thousand emails, and when I view the audience via the website it says “PROCESSING”.
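For concreteness, the three requests described above can be sketched like this (a sketch only: the endpoint paths and parameter names are taken from this thread, while the helper function names and placeholder IDs are mine; authentication and the actual HTTP transport are omitted):

```python
# Sketch of the three-step flow described above. Each helper returns the
# (method, url, params) it would send, rather than performing the request.

ADS_API = "https://ads-api.twitter.com/1"
TON_API = "https://ton.twitter.com/1.1"

def create_audience_request(account_id, name):
    # Step 1: create an EMAIL-type tailored audience placeholder;
    # the new audience id comes back in data.id of the response.
    return ("POST", f"{ADS_API}/accounts/{account_id}/tailored_audiences",
            {"name": name, "list_type": "EMAIL"})

def upload_file_request():
    # Step 2: POST the hashed-email file body to the TON bucket;
    # the file location comes back in the "Location" response header.
    return ("POST", f"{TON_API}/ton/bucket/ta_partner")

def change_request(account_id, audience_id, file_location):
    # Step 3: ask the Ads API to apply the uploaded file to the audience.
    return ("POST", f"{ADS_API}/accounts/{account_id}/tailored_audience_changes",
            {"tailored_audience_id": audience_id,
             "operation": "ADD",
             "input_file_path": file_location})
```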

If I query https://ads-api.twitter.com/1/accounts/{accountId}/tailored_audience_changes, it also says that the update request has a state of “COMPLETED”.

I have waited several days, but the website interface still says “PROCESSING”, and the tailored_audience_changes endpoint still says “COMPLETED”.

All endpoints give me successful (CREATED) responses, and there are no errors or messages that point to what is going wrong.

I have previously been able to create audiences and upload email files manually through the website, and this has been successful.

My best guess is that the format of the file I am uploading to the bucket is incorrect. But as far as I can tell I have followed all of the steps describing the correct format of “Tailored Audience File Data” here: https://dev.twitter.com/ads/audiences/file-data

I normalise each email address by lowercasing it and trimming leading and trailing spaces.
I then hash each email address using SHA-256, without a salt.
I join the hashed addresses with line feeds ("\n") and trim the final line feed.
This is the data that I POST to the “/ton/bucket/ta_partner” endpoint.
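The steps above can be sketched as follows (a minimal sketch of my own; only the normalise-lowercase-trim, unsalted SHA-256, and LF-joining rules come from the post, the function name is made up):

```python
import hashlib

def build_ta_file(emails):
    """Normalise, hash, and join email addresses as described above:
    lowercase, trim surrounding whitespace, unsalted SHA-256 hex digest,
    one hash per line, LF-separated, with no trailing line feed."""
    hashes = []
    for email in emails:
        normalised = email.strip().lower()
        hashes.append(hashlib.sha256(normalised.encode("utf-8")).hexdigest())
    return "\n".join(hashes)
```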

I have experimented with a number of changes to this file format, but none have worked so far:
I have tried using CRLF ("\r\n") instead of LF ("\n").
I have tried using unhashed email addresses.
I have tried not trimming the final LF from the end of the file.

Any help or pointers would be much appreciated. If you need any more information, or have any questions, please let me know.

Regards

Tim


#2

Hi there,

Occasionally the processing pipeline gets backed up and can take several days to process changes. This worsens if you call the endpoint a large number of times: in the worst case, if you call it 1000 times it will take that much longer to work through all those files in the queue. It’s better to limit the number of submissions you make against this endpoint. If you called it many times during development, it may take a while to work through them.

For the file contents/upload, it’s probably easiest to try uploading with our ton_upload script to check whether the problem is in your hashing or in the TON upload itself (this requires twurl authentication to be set up):

Example of running ton_upload with tracing enabled:
./ton_upload --trace --mode upload --bucket {bucket name} --file /path/to/file

The output file location can then be used with another tailored audience placeholder and sent to tailored_audience_changes. The general flow you describe seems right to me, so if you are still seeing problems after further debugging, please go ahead and whisper or DM me the account and audience IDs.

Thanks,

John


#3

Same response as in the other thread: I was able to confirm that there are some delays with TA list processing right now, so it’s likely nothing to do with the way you are uploading the file.


#4

Hi John,

I’ve been trying this for a while (over a week), and I have never seen anything move out of “PROCESSING”.
I’ve deleted the old audiences as I’ve gone along, but I still have a set from 1, 2, and 5 Sep.

My twitter account id is 18ce53unwvv

The audience names I am experimenting with are below

volunteer | wales | 05 Sep 2016 10:26
MondayManualTest3
MondayTest2
MonManualtest1
volunteer | wales | 05 Sep 2016 09:36
volunteer | wales | 05 Sep 2016 09:32
volunteer | wales | 02 Sep 2016 14:18
test2
volunteer | wales | 02 Sep 2016 13:59
volunteer | wales | 02 Sep 2016 13:57
volunteer | wales | 02 Sep 2016 12:19
testaudience
volunteer | wales | 01 Sep 2016 16:54
volunteer | wales | 01 Sep 2016 16:41
volunteer | wales | 01 Sep 2016 16:36
volunteer | wales | 01 Sep 2016 16:34
volunteer | wales | 01 Sep 2016 15:43

(If you need the ids I can go through and get them.) All the names containing pipes (“volunteer….”) were created via the API.

The other audiences are created manually via the website.

All are marked “PROCESSING”.

In the latest uploads I have tried comma-separating the hashed emails, but the earlier audiences used hashed emails separated by LF.

Thanks

Tim


#5

Unfortunately, it’s probable that even those audiences from Sep 1 are delayed for the same reason. I would recommend waiting another 48 hours; if the audiences still do not appear by then, please let us know.


#6

Hi @totaljobsUK,

I’ve built a .NET API that makes it easy to perform Ads API operations. Social Opinion automatically listens and generates Tailored Audiences using machine learning and AI, based on Twitter signals and your custom search criteria.

I’m looking for private beta users and wondered if this is something that could help you?

Regards,
Jamie.


#7

Hi John,

I’ve noticed that some of the audiences I set up manually via the website have now finished processing and are “READY”:

testaudience
test2
MonManualtest1
MondayTest2
MondayManualTest3

But the audiences I was trying to create via the API (at the same time) are still stuck in “PROCESSING”:

Cabin Crew | london | 07 Sep 2016 09:40
developer | kent | 06 Sep 2016 09:18
volunteer | wales | 05 Sep 2016 10:26
volunteer | wales | 05 Sep 2016 09:36
volunteer | wales | 05 Sep 2016 09:32
volunteer | wales | 02 Sep 2016 14:18
volunteer | wales | 02 Sep 2016 13:59
volunteer | wales | 02 Sep 2016 13:57
volunteer | wales | 02 Sep 2016 12:19
volunteer | wales | 01 Sep 2016 16:54
volunteer | wales | 01 Sep 2016 16:41
volunteer | wales | 01 Sep 2016 16:36
volunteer | wales | 01 Sep 2016 16:34
volunteer | wales | 01 Sep 2016 15:43

Are you able to have a look at what the issue is with them? Is there something incorrect in the format of the files I have uploaded?
(I have tried a variety of alternatives with respect to line endings and hashing across these audiences.)

Thanks

Tim


#8

Hi,

These audiences are still stuck in “PROCESSING”. Are you able to see anything on your side that would indicate what the problem is?

Thanks

Tim


#9

Hi,

Those audiences are still stuck.

I created 2 new audiences via the API on Friday morning, but they have still not been processed.

c# developer comma | london | 09 Sep 2016 11:09

c# developer CRLF | london | 09 Sep 2016 11:04

Via the API I can:

1. create the audience
2. upload a file of hashed email addresses
3. call the API to update the audience with the list of email addresses
The API returns success messages at every step, but the audiences are always left in “PROCESSING”.

After step (2), where I upload a file to the API, I can request the file back. If I then manually create an audience via the website and submit this file, it works after 24 to 48 hours and the audience is processed. But I have not seen this work for audiences created via the API.

Is there any way to see what the problem might be with the two audiences above:

c# developer comma | london | 09 Sep 2016 11:09

c# developer CRLF | london | 09 Sep 2016 11:04

or with these two audiences created the day before (which are still in “PROCESSING”):

“| london | 08 Sep 2016 09:37”

“cabin crew 1 CRLF | london | 08 Sep 2016 09:35”

Or with all of these audiences:

Cabin Crew | london | 07 Sep 2016 09:40
developer | kent | 06 Sep 2016 09:18
volunteer | wales | 05 Sep 2016 10:26
volunteer | wales | 05 Sep 2016 09:36
volunteer | wales | 05 Sep 2016 09:32
volunteer | wales | 02 Sep 2016 14:18
volunteer | wales | 02 Sep 2016 13:59
volunteer | wales | 02 Sep 2016 13:57
volunteer | wales | 02 Sep 2016 12:19
volunteer | wales | 01 Sep 2016 16:54
volunteer | wales | 01 Sep 2016 16:41
volunteer | wales | 01 Sep 2016 16:36
volunteer | wales | 01 Sep 2016 16:34
volunteer | wales | 01 Sep 2016 15:43

Thanks

Tim


#10

Hi,

Sorry to confirm that the system is still experiencing instability with TA list processing. The team is doing everything possible to speed up recovery.

Thanks,

John


#11

I have a developer working on the chunked upload, and after uploading the various 20 MB chunks, the final location string we get back returns a 404.


#12

The general advice we have for debugging TON upload is to compare how your files are being chunked against our TON upload script or TON via the Ruby SDK; it’s probably easiest to use https://github.com/twitterdev/ton-upload/blob/master/ton_upload . If you compare and still cannot see a difference in how the calls are being made, feel free to open a new thread with those details so it gets more attention.
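When comparing chunking implementations, the core of it is splitting the file into fixed-size pieces and tracking byte ranges. A rough, hypothetical sketch follows: the 20 MB figure comes from the post above, but the helper names and the byte-range formatting are my assumptions, so check them against the ton_upload script itself:

```python
def split_chunks(data: bytes, chunk_size: int = 20 * 1024 * 1024):
    """Yield (start, end_exclusive, chunk) tuples for a payload,
    so each chunk can be sent with a byte-range header."""
    for start in range(0, len(data), chunk_size):
        chunk = data[start:start + chunk_size]
        yield start, start + len(chunk), chunk

def byte_range(start, end_exclusive, total):
    # HTTP-style byte ranges use an inclusive end offset.
    return f"bytes {start}-{end_exclusive - 1}/{total}"
```

An off-by-one in the final chunk's range, or a mismatch between the declared total and the actual file size, is a common way for the final assembled location to end up invalid.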

For TA list processing in general, there is still a huge backlog of stuck files. A newly submitted audience may get picked up earlier than one that has been stuck for more than a week, so please consider uploading some fresh audiences and see whether that goes any better.


#14

Hi @JBabichJapan, thanks for responding. We ported the chunking mechanism to .NET directly from the Ruby script.

Another thing we are noticing is that despite submitting additional single uploads, when we run this:

https://ads-api.twitter.com/1/accounts/18ce549rjnj/tailored_audience_changes

The count appears to be stuck at 27. Any ideas?


#15

If you are using REPLACE when calling https://dev.twitter.com/ads/reference/post/accounts/%3Aaccount_id/tailored_audience_change, that could be one reason the number of changes being tracked is stuck at 27; otherwise I would definitely expect the number to fluctuate as you add new audience files.

Generally, because there is a large time lag in sizing an audience for the first time (necessary because we need to filter to users who have been active on our platform in the last few months), it may be more efficient to get the users into an audience first and then use ADD for new batches, but I would encourage you to experiment with what works best for you.
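To illustrate the difference, here is a hypothetical sketch with made-up IDs and paths: the “ADD” and “REPLACE” operation values and the parameter names come from this thread, while “REMOVE” is my recollection of the endpoint and the helper itself is mine:

```python
def change_payload(audience_id, file_location, operation="ADD"):
    """Build a tailored_audience_changes payload. REPLACE swaps the
    audience membership for the file's contents each time, while ADD
    appends a new batch to the existing membership."""
    if operation not in ("ADD", "REMOVE", "REPLACE"):
        raise ValueError(f"unknown operation: {operation}")
    return {"tailored_audience_id": audience_id,
            "operation": operation,
            "input_file_path": file_location}

# Seed the audience once, then append incremental batches with ADD
# instead of re-sending the full list with REPLACE every time.
batches = [change_payload("abc1", f"/ta_partner/123/batch_{i}")
           for i in range(3)]
```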

The processing queue for these files is shared between people uploading via the API, partner tools, and ads.twitter.com directly. We are working to streamline the time taken to process updates and to provide new tools for API partners. In the short term, whenever your audience seems to be taking abnormally long to process (and you uploaded it within the last week, after some recent code changes were added), my advice is to post a new thread with a link to your audience page and a screenshot, or to send the same information directly to ads.twitter.com Ads Support, who can help escalate your audience processing.


#16

@JBabichJapan Our dev was using ADD; we will try uploading smaller batches to see if that helps.

We understand you are working on streamlining the pipeline, so we will post a new thread with a link and screenshot.

Thanks for the escalation information.

Finally, are you able to DM me any updates on the new tools for API partners?