Hi all,
first-timer here on the forum, so I hope I'm not breaking any rules; please be gentle!
I’m playing with API v1.1, particularly with “GET favorites/list”. My purpose is to save a local copy of my liked tweets.
I'd like to be able to paginate in case there are more than 200 items in the response, and I was hoping cursoring would be the answer. Unfortunately, there's no mention of this feature on the endpoint's help page, nor does cursor=X seem to work for me.
page=X works but, in that case, I've found no way to discover whether there's a next page or I'm on the last one.
- does anyone know whether there's a way to make it work?
- any alternative methods?
- is there a similar, possibly better, way to achieve the same with the API v2.0?
Thanks,
Carmelo
The images are broken in the docs on this page (cc @andypiper ), but favourites/list is paginated like this: Working with timelines | Docs | Twitter Developer Platform (the same way as GET statuses/user_timeline), with since_id and max_id.
This will only go back 3,200 tweets as far as I remember; I don't think this has changed. But your Twitter data download has a likes.js that contains all of them, so you can merge the IDs you get from that and keep up to date with favourites/list.
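To illustrate the merge idea: likes.js in the data download is a JavaScript assignment wrapping a JSON array, so you can strip the prefix and parse the rest. A minimal sketch, assuming the archive format at the time of writing (`window.YTD.like.part0 = [...]` with a `tweetId` field per entry; check your own download, as the exact shape may differ):

```python
import json

def parse_likes_js(raw):
    """Extract tweet IDs from likes.js in a Twitter data download.

    The file looks like:
        window.YTD.like.part0 = [{"like": {"tweetId": "123", ...}}, ...]
    so we strip everything up to the first '=' and parse the rest as JSON.
    """
    payload = raw[raw.index("=") + 1:]
    return [entry["like"]["tweetId"] for entry in json.loads(payload)]

# Tiny inline sample standing in for a real likes.js file:
sample = 'window.YTD.like.part0 = [{"like": {"tweetId": "111"}}, {"like": {"tweetId": "222"}}]'
print(parse_likes_js(sample))  # ['111', '222']
```

The resulting ID set can then be unioned with whatever favourites/list returns on later runs.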
Hi @IgorBrigadir, and thanks for replying.
Starting from likes.js is an interesting approach which I hadn’t thought about. I’ll try that shortly.
As for the page you’ve linked, I’m aware of that and, in fact, I’m already using since_id. That works quite well when the script is run in an incremental fashion (e.g. daily/hourly fetches).
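For the incremental case described above, one run boils down to passing the newest stored ID as since_id and merging the result into the local store. A sketch under the assumption that `fetch_page` is a placeholder for whatever performs the actual GET favorites/list request (requests, tweepy, ...) and returns a list of tweet dicts:

```python
def incremental_update(fetch_page, stored_ids):
    """One incremental run: fetch only likes newer than the newest stored one.

    `fetch_page` is a placeholder, not a real library function; it accepts
    query parameters as keyword arguments and returns a list of tweet dicts.
    """
    params = {"count": 200}
    if stored_ids:
        # since_id is exclusive: only tweets with a greater ID come back.
        params["since_id"] = max(stored_ids)
    new_tweets = fetch_page(**params)
    return sorted(stored_ids | {t["id"] for t in new_tweets})

# Fake fetcher standing in for the API call:
def fake_fetch(**params):
    likes = [{"id": 5}, {"id": 7}, {"id": 9}]
    floor = params.get("since_id", 0)
    return [t for t in likes if t["id"] > floor]

print(incremental_update(fake_fetch, {1, 2, 5}))  # [1, 2, 5, 7, 9]
```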
The main issue I'm trying to solve is the first run, when there are more than 200 likes to fetch. In that case a pointer to the next page would be ideal.
As I said, I’ll try with likes.js to work around the first run.
I'm curious whether "favorites/list" will make it into API v2.0 and whether there'll be any improvements.
Thanks,
Carmelo
Oh, sure: paging backwards, you can use max_id without since_id (set max_id just below the smallest ID you've seen to get the next-oldest set of 200 tweets).
So to go backwards: leave max_id and since_id out of the first request and only specify count=200, then figure out the lowest ID in the response, set max_id to that (minus 1, since max_id is inclusive) in the next request, get another 200, figure out the lowest ID… and repeat until the response comes back empty.
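The steps above can be sketched as a loop. `fetch_page` is again a placeholder for the real GET favorites/list call, assumed to take query params as keyword arguments and return a (possibly empty) list of tweet dicts, newest first:

```python
def backfill(fetch_page):
    """Page backwards through likes using max_id, as described above."""
    all_tweets = []
    params = {"count": 200}  # first request: no max_id/since_id
    while True:
        page = fetch_page(**params)
        if not page:
            break
        all_tweets.extend(page)
        # max_id is inclusive, so subtract 1 to avoid re-fetching
        # the oldest tweet of this page on the next request.
        params["max_id"] = min(t["id"] for t in page) - 1
    return all_tweets

# Fake fetcher simulating pages of up to 3 likes each:
def fake_fetch(count=200, max_id=None, **_):
    likes = [{"id": i} for i in range(10, 4, -1)]  # IDs 10..5, newest first
    if max_id is not None:
        likes = [t for t in likes if t["id"] <= max_id]
    return likes[:3]

print([t["id"] for t in backfill(fake_fetch)])  # [10, 9, 8, 7, 6, 5]
```

In a real run you would also want to respect the endpoint's rate limit between requests.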
That’s a great suggestion. I hadn’t thought of the combined use of max_id and since_id.
Many thanks!
Thanks for the note on the images here. I’ll add this to our docs list to fix up!
Re V2, you are welcome to request improvements and additions via https://twitterdevfeedback.uservoice.com - thanks!
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.