This request looks like it might be automated


#1

My bot got muzzled after repeating the same series of messages twice. It is clearly frustrating to have to stop development and “try again later” just because Twitter offers no development sandbox. Additionally, even in production, messages may legitimately be repetitive (think of warnings or notifications).

So, should I conclude that Twitter is not yet ready for prime time and build my bots on, for example, FB Messenger or Telegram instead?


#2

We’re prioritising bot-like features in the Direct Messages API at the moment, and I know you’d previously said that is what you’ve been looking to use. However, we do still have automation rules - we issued a policy clarification here recently - and our machine learning antispam systems will detect some of these kinds of repeated-content messages.
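For context, a minimal sketch of sending a single message through the v1.1 Direct Messages API (the API under discussion) is shown below; the credentials and recipient_id are placeholders and this is not an official sample:

```python
# Minimal sketch: send one Direct Message via POST direct_messages/events/new.
# Credentials and recipient_id below are placeholders, not real values.
import json
from requests_oauthlib import OAuth1Session

twitter = OAuth1Session(
    "CONSUMER_KEY",                 # placeholder app credentials
    client_secret="CONSUMER_SECRET",
    resource_owner_key="ACCESS_TOKEN",
    resource_owner_secret="ACCESS_TOKEN_SECRET",
)

payload = {
    "event": {
        "type": "message_create",
        "message_create": {
            "target": {"recipient_id": "1234567890"},     # hypothetical user id
            "message_data": {"text": "Your report is ready."},
        },
    }
}

resp = twitter.post(
    "https://api.twitter.com/1.1/direct_messages/events/new.json",
    data=json.dumps(payload),
    headers={"Content-Type": "application/json"},
)
resp.raise_for_status()
```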

Are these repeated identical messages sent to the same user, or to different ones? Do they include (for example) any URLs?

You’re welcome to choose the platform of your preference, and we apologise if you’re finding our systems to be problematic. This is good feedback as we iterate on this aspect of the API.


#3

The account was muzzled after 2 different users received the same sequence of messages; no message contained URLs. I think Twitter’s antispam ML should be a little more lenient, especially when a bot is not yet in production (it has 4 followers); otherwise it can be very frustrating. Of course my preference goes to Twitter, but if I have to take forced breaks, I see no other option than going elsewhere.


#4

Thanks for the feedback. That’s a good example of a reason we could look into adjusting this for new bot development.


#5

That would be great: some additional factors that could go into the ML training dataset are, for example, the initiator of the conversation and the delay between bot/user requests and replies. Spam conversations are - I suppose - initiated by the bot, whereas conversations initiated by a user are less likely to be unexpected. Also, test conversations are usually very fast, since the goal is to exercise the conversation flow under different assumptions. The number of followers and the user IDs that most often interact with the bot would help as well. I think a Bayesian classifier would do the job well enough.
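To make the idea concrete, here is a rough, illustrative sketch of those features feeding a naive Bayes classifier; the feature set and the toy training data are assumptions for the sake of the example, not anything Twitter actually uses:

```python
# Illustrative only: the features suggested above fed to a naive Bayes model.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Each row: [bot_initiated (0/1), median_reply_delay_seconds,
#            follower_count, distinct_known_interactors]
X_train = np.array([
    [1,   2.0,     3,  1],   # bot-initiated, near-instant replies: spam/test-like
    [1,   1.5,     5,  2],
    [0,  45.0,   120, 15],   # user-initiated, human-paced replies: legitimate-looking
    [0,  90.0,  3000, 40],
])
y_train = np.array([1, 1, 0, 0])   # 1 = spam-like, 0 = legitimate

clf = GaussianNB().fit(X_train, y_train)

# Score a new conversation: user-initiated, slow replies, tiny new bot.
print(clf.predict_proba([[0, 30.0, 4, 2]]))
```

The point is only that user-initiated, human-paced conversations separate cleanly from bot-initiated bursts, even with a very simple model.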