Skipping through result pages



When scanning followers/list or other endpoints, the use of the cursor is pretty straightforward and clear.
But I am trying to figure out whether there is an easy workaround to skip a bit further than one page at a time, basically by modifying the cursor value.

I’ve read in older discussions that the cursor is a number compiled from a timestamp and a user id, used for B-tree indexing.

Why do I want this?

I need to extract a “somewhat random” sample of, say, 300 followers of someone’s account and run some statistics on it. Instead of paging through all followers (some accounts have up to 1M), I want to peek at certain pages only: pick 20 users here, 20 users from a random page further back, and so on, until I have a list of up to 300 unique follower ids. Then I can run my statistics and leave the API alone, instead of crawling ALL followers and discarding almost all of the data just to get my sample of 300.
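The plan above can be sketched as a loop. Everything here is a self-contained simulation: `fetch_page` is a stand-in for GET followers/list, and its integer page-index cursor is NOT the real opaque cursor — the follower count, page size, and parameter names are all made up for illustration:

```python
import random

def fetch_page(cursor):
    # Stand-in for GET followers/list: returns (ids_on_page, next_cursor).
    # Simulates 200_000 followers split into pages of 200 ids; a
    # next_cursor of 0 means "no more pages" (the real API uses 0/-1
    # sentinel cursors as well).
    page_size, total = 200, 200_000
    start = cursor * page_size
    ids = list(range(start, min(start + page_size, total)))
    next_cursor = cursor + 1 if start + page_size < total else 0
    return ids, next_cursor

def random_follower_sample(target=300, per_page=20, max_skip=10, seed=None):
    rng = random.Random(seed)
    sample = set()
    cursor = 0  # first page in this simulation
    while len(sample) < target:
        ids, cursor = fetch_page(cursor)
        if ids:
            # pick a handful of users from this page
            sample.update(rng.sample(ids, min(per_page, len(ids))))
        if cursor == 0:  # ran out of followers before reaching the target
            break
        # "Skip ahead" a random number of pages. Note: against the real
        # API this part is the open question -- each page's cursor only
        # comes from fetching the page before it, so a genuine skip would
        # still cost one request per skipped page.
        cursor += rng.randrange(0, max_skip)
    return sorted(sample)[:target]
```

The loop terminates either when the target is reached or when the simulated cursor runs out, which matches the stopping condition described above.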

What I have tried (and failed at):

Receiving a cursor for the second page and modifying it by adding/subtracting random values (+50000, +1000), hoping it would seek to a different location and skip a few hundred or thousand users (I understand this delta might vary dramatically and over time).
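If the cursor really is a packed (timestamp, user id) value, a toy encoding (entirely made up, only to illustrate that reading) shows why a fixed arithmetic delta is unlikely to land on a valid position:

```python
# Toy illustration only -- NOT the real cursor format.
def make_cursor(ts, user_id):
    # Pack a follow-event timestamp in the high bits, user id in the low 24.
    return (ts << 24) | user_id

c = make_cursor(1_600_000, 12345)
probe = c + 50_000  # the "+50000" experiment from above
# The delta only moved the user-id bits: same timestamp, different user id.
# That (timestamp, user) pair almost certainly isn't a real follow event,
# so the server has nothing meaningful to seek to.
print(probe >> 24, probe & 0xFFFFFF)
```

In other words, a delta in cursor space does not correspond to a delta in follower count, which would explain why the probes failed.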

All I really want is: Keep skipping/picking a page until I have 300 somewhat randomly picked users, or the cursor returns -1.

Is this doable? Thanks guys!