Hi,
I am trying to get all tweets and their associated user fields (username, name, etc.) that match a certain query using search_recent_tweets. I tried pagination with flattening, but it only flattens the tweets, not the user fields. So I tried to implement something like the next_token loop I use with get_users_tweets, but search_recent_tweets doesn't seem to accept a pagination_token argument. How can I do this?

This is the code I am trying to use, which only gives me the first 100 tweets because pagination_token doesn't exist for search_recent_tweets:

import pandas as pd
import tweepy

BEARER_TOKEN = ''
api = tweepy.Client(BEARER_TOKEN)

response = api.search_recent_tweets(query='myquery',
                                    start_time='2022-09-19T00:00:00Z',
                                    end_time='2022-09-19T23:59:59Z',
                                    expansions=['author_id'],
                                    tweet_fields=['created_at'],
                                    user_fields=['username', 'name'],
                                    max_results=100)
tweet_df = pd.DataFrame(response.data)
metadata = response.meta
users = pd.concat({k: pd.DataFrame(v) for k, v in response.includes.items()}, axis=0)
users = users.reset_index(drop=True)
users.rename(columns={'id':'author_id'}, inplace=True)
all_tweets = tweet_df.merge(users, on='author_id')
next_token = metadata.get('next_token')
while next_token is not None:
    response = api.search_recent_tweets(query='myquery',
                                        start_time='2022-09-19T00:00:00Z',
                                        end_time='2022-09-19T23:59:59Z',
                                        expansions=['author_id'],
                                        tweet_fields=['created_at'],
                                        user_fields=['username', 'name'],
                                        pagination_token=next_token,
                                        max_results=100)
    tweet_df = pd.DataFrame(response.data)
    metadata = response.meta
    users = pd.concat({k: pd.DataFrame(v) for k, v in response.includes.items()}, axis=0)
    users = users.reset_index(drop=True)
    users.rename(columns={'id':'author_id'}, inplace=True)
    tweets = tweet_df.merge(users, on='author_id')
    # DataFrame.append is deprecated and returns a new frame, so reassign
    all_tweets = pd.concat([all_tweets, tweets], ignore_index=True)
    next_token = metadata.get('next_token')
    
all_tweets
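As an aside, the accumulation step in a loop like this is a common stumbling block: `DataFrame.append` returns a new frame (and is deprecated in recent pandas), so the usual pattern is to collect each page's frame in a list and concatenate once at the end. A minimal sketch of that merge-then-concat pattern — the page payloads below are invented mock data standing in for successive API responses:

```python
import pandas as pd

# Mock pages standing in for successive search_recent_tweets responses
# (the tweet and user dicts are made up for illustration).
pages = [
    {
        "data": [{"id": "1", "author_id": "10", "created_at": "2022-09-19T01:00:00Z"}],
        "users": [{"id": "10", "username": "alice", "name": "Alice"}],
    },
    {
        "data": [{"id": "2", "author_id": "20", "created_at": "2022-09-19T02:00:00Z"}],
        "users": [{"id": "20", "username": "bob", "name": "Bob"}],
    },
]

frames = []
for page in pages:
    tweet_df = pd.DataFrame(page["data"])
    users = pd.DataFrame(page["users"]).rename(columns={"id": "author_id"})
    # Merge the expanded user fields onto each tweet via the author_id key
    frames.append(tweet_df.merge(users, on="author_id"))

# Concatenate once at the end instead of appending inside the loop
all_tweets = pd.concat(frames, ignore_index=True)
print(len(all_tweets), sorted(all_tweets.columns))
```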

Try using twarc (GitHub - DocNow/twarc: A command line tool (and Python library) for archiving Twitter JSON) for this. twarc will automatically paginate results, and twarc-csv can then flatten the data for both tweets and users and give you a DataFrame or CSV back, either on the command line or as a library (Examples of using twarc2 as a library - twarc).