Table.to_arrow_dataset

Table.to_arrow_dataset(max_results=None, *, variables=None, progress=True, batch_preprocessor=None) → pyarrow.dataset.Dataset

Returns a representation of the table as a pyarrow Dataset. Pyarrow datasets are backed by files on disk rather than in memory, allowing you to work with a table without loading its contents into memory. The file backing the dataset is stored in your operating system's temp directory, unless the REDIVIS_TMPDIR environment variable is set.
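For example, a minimal usage sketch (the organization, dataset, and table names below are placeholders, and the accessor chain assumes the standard redivis Python client):

```python
import redivis

# Placeholders: replace with your own organization (or user), dataset, and table names
table = (
    redivis.organization("demo_organization")
    .dataset("some_dataset")
    .table("some_table")
)

# Build a disk-backed pyarrow Dataset from the table
arrow_dataset = table.to_arrow_dataset(
    max_results=100_000,          # read at most 100,000 rows
    variables=["id", "amount"],   # only read these variables (case-insensitive)
)

print(arrow_dataset.schema)
print(arrow_dataset.count_rows())
```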

Parameters:

max_results : int, default None The maximum number of rows to return. If not specified, all rows in the table will be read.

variables : list<str>, default None A list of variable names to read, which improves performance when not all variables are needed. If unspecified, all variables are included. Variable names are case-insensitive, though the names in the result reflect each variable's true casing. The order of columns in the dataset corresponds to the order of names in this list.

progress : bool, default True Whether to show a progress bar.

batch_preprocessor : function, default None Function used to preprocess the data, invoked for each batch of records as it is initially loaded. This can be helpful in reducing the size of the data before it is written to the dataset's backing file. The function accepts one argument, a pyarrow.RecordBatch, and must return a pyarrow.RecordBatch or None; a sketch is shown below. If you prefer to work with the data solely in a streaming manner, see Table.to_arrow_batch_iterator().
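A sketch of a possible batch_preprocessor, assuming the table has a numeric variable named "amount" (a placeholder); it filters each batch before the data is stored:

```python
import pyarrow.compute as pc

def keep_large_amounts(batch):
    # `batch` is a pyarrow.RecordBatch; keep only rows where the (hypothetical)
    # "amount" variable exceeds 1000. Returning None instead would drop the
    # entire batch.
    return batch.filter(pc.greater(batch.column("amount"), 1000))

# `table` is a Redivis table reference, as in the usage sketch above
arrow_dataset = table.to_arrow_dataset(batch_preprocessor=keep_large_amounts)
```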

Returns:

pyarrow.dataset.Dataset
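The returned object can be consumed with the standard pyarrow.dataset API, for example by scanning it in batches or materializing a filtered subset. The column names below are placeholders:

```python
import pyarrow.dataset as ds

# `table` is a Redivis table reference, as in the usage sketch above
arrow_dataset = table.to_arrow_dataset()

# Stream over the data batch by batch without loading it all into memory
for batch in arrow_dataset.to_batches(columns=["id", "amount"]):
    print(batch.num_rows)

# Or materialize a filtered subset as an in-memory pyarrow Table
subset = arrow_dataset.to_table(filter=ds.field("amount") > 1000)
```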
