File Formats#
Arrow & Feather#
Loading and saving BeForRecord data in Apache Arrow format
To use this module, install the Python library pyarrow.
- record_to_arrow: Convert a BeForRecord instance to a pyarrow.Table.
- arrow_to_record: Create a BeForRecord instance from a pyarrow.Table.
- epochs_to_arrow: Convert a BeForEpochs instance to a pyarrow.Table.
- arrow_to_epochs: Create a BeForEpochs instance from a pyarrow.Table.
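Taken together, a typical round trip through the Feather format looks like the following sketch (the file name is illustrative and my_record stands for an existing BeForRecord):

>>> from pyarrow import feather
>>> from befordata import arrow
>>> tbl = arrow.record_to_arrow(my_record)            # BeForRecord -> pyarrow.Table
>>> feather.write_feather(tbl, "forces.feather")       # save to disk
>>> rec = arrow.arrow_to_record(feather.read_table("forces.feather"))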
XDF#
Converting XDF streaming data to BeForData
Use the library pyxdf to read XDF files.
- before_record: Create a BeForRecord object from XDF stream data.
- data: Convert XDF channel data to a pandas DataFrame.
- channel_info: Retrieve metadata information for a specific channel from XDF streaming data.
CSV#
Support for reading compressed CSV files with embedded comments
- read_csv: Reads a CSV file, supporting comments and compression.
befordata.arrow#
- record_to_arrow(rec)#
Convert a BeForRecord instance to a pyarrow.Table
The resulting Arrow table will include schema metadata for sampling rate, time column, sessions, and any additional metadata from the BeForRecord.
- Return type:
Table
- Parameters:
- rec : BeForRecord
The BeForRecord instance to convert.
Examples
>>> from pyarrow import feather
>>> tbl = record_to_arrow(my_record)
>>> feather.write_feather(tbl, "filename.feather",
...     compression="lz4", compression_level=6)
- arrow_to_record(tbl, sampling_rate=None, sessions=None, time_column=None, meta=None)#
Create a BeForRecord instance from a pyarrow.Table.
Reads metadata from the Arrow schema to reconstruct the BeForRecord’s sampling rate, time column, sessions, and meta dictionary.
- Return type:
BeForRecord
- Parameters:
- tbl : pyarrow.Table
Arrow table to convert.
- sampling_rate : float, optional
Override the sampling rate from metadata.
- sessions : list of int, optional
Override the sessions from metadata.
- time_column : str, optional
Override the time column from metadata.
- meta : dict, optional
Additional metadata to merge with Arrow metadata.
- Raises:
- TypeError
If tbl is not a pyarrow.Table.
- RuntimeError
If no sampling rate is defined.
Examples
>>> from pyarrow.feather import read_table
>>> dat = arrow_to_record(read_table("my_force_data.feather"))
- epochs_to_arrow(ep)#
Convert a BeForEpochs instance to a pyarrow.Table.
The resulting Arrow table will contain both the sample data and the design matrix. If baseline adjustment was performed, the baseline values are included as an additional column. Metadata for sampling rate and zero sample are stored in the schema.
- Return type:
Table
- Parameters:
- ep : BeForEpochs
The BeForEpochs instance to convert.
Examples
>>> from pyarrow import feather
>>> tbl = epochs_to_arrow(my_epochs)
>>> feather.write_feather(tbl, "my_epochs.feather",
...     compression="lz4", compression_level=6)
- arrow_to_epochs(tbl, sampling_rate=None, zero_sample=None, meta=None)#
Create a BeForEpochs instance from a pyarrow.Table.
Reads metadata from the Arrow schema to reconstruct the BeForEpochs’ sampling rate and zero sample. Extracts baseline values if present.
- Return type:
BeForEpochs
- Parameters:
- tbl : pyarrow.Table
Arrow table to convert.
- sampling_rate : float, optional
Override the sampling rate from metadata.
- zero_sample : int, optional
Override the zero sample from metadata.
- meta : dict, optional
Additional metadata to merge with Arrow metadata.
- Raises:
- TypeError
If tbl is not a pyarrow.Table.
- RuntimeError
If no sampling rate is defined.
Examples
>>> from pyarrow.feather import read_table
>>> dat = arrow_to_epochs(read_table("my_epochs.feather"))
befordata.xdf#
- before_record(xdf_streams, channel, sampling_rate)#
Create a BeForRecord object from XDF stream data.
Returns a BeForRecord object containing the channel data, sampling rate, time column name, and channel metadata.
- Return type:
BeForRecord
- Parameters:
- xdf_streams : list of dict
List of XDF streams as returned by pyxdf.load_xdf.
- channel : int or str
Channel index (int) or channel name (str) to extract data from.
- sampling_rate : float
Sampling rate of the channel data.
- Raises:
- ValueError
If the specified channel cannot be found.
Examples
>>> from pyxdf import load_xdf
>>> streams, header = load_xdf("my_lsl_recording.xdf")
>>> rec = xdf.before_record(streams, "Force", 1000)
- data(xdf_streams, channel)#
Convert XDF channel data to a pandas DataFrame.
Returns a Pandas DataFrame containing time stamps and channel data, with columns named according to the global TIME_STAMPS variable and channel labels.
- Return type:
DataFrame
- Parameters:
- xdf_streams : list of dict
List of XDF streams as returned by pyxdf.load_xdf.
- channel : int or str
Channel index (int) or channel name (str).
- Raises:
- ValueError
If the specified channel cannot be found.
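A minimal usage sketch (the stream name "Force" and the recording file are the same illustrative ones as in the before_record example above):

>>> from pyxdf import load_xdf
>>> from befordata import xdf
>>> streams, header = load_xdf("my_lsl_recording.xdf")
>>> df = xdf.data(streams, "Force")   # DataFrame with a time-stamp column plus one column per channel label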
- channel_info(xdf_streams, channel)#
Retrieve metadata information for a specific channel from XDF streaming data.
Returns a dictionary containing channel metadata such as name, type, channel count, channel format, clock times, and clock values.
- Return type:
Dict
- Parameters:
- xdf_streams : list of dict
List of XDF streams as returned by pyxdf.load_xdf.
- channel : int or str
Channel index (int) or channel name (str).
- Raises:
- ValueError
If the specified channel cannot be found.
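A usage sketch along the same lines (the contents of the returned dictionary are only indicated loosely; see the description above):

>>> from pyxdf import load_xdf
>>> from befordata import xdf
>>> streams, header = load_xdf("my_lsl_recording.xdf")
>>> info = xdf.channel_info(streams, "Force")   # dict with name, type, channel count, channel format, ...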
Globals#
To change the column name for time stamps in the dataframe, modify the global string variable befordata.xdf.before.TIME_STAMPS (default: "time"). Set this variable to your preferred column name before loading data.
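For example, the following sketch renames the time-stamp column to "timestamp" (an arbitrary name) before building a record; it assumes befordata.xdf.before is importable under the path given above and that streams were loaded as in the examples earlier:

>>> import befordata.xdf.before as before
>>> before.TIME_STAMPS = "timestamp"                   # replace the default "time"
>>> rec = xdf.before_record(streams, "Force", 1000)    # the time column is now named "timestamp"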
befordata.csv#
- read_csv(file_path, columns=None, encoding='utf-8', comment_char='#')#
Reads a CSV file, supporting comments and compression.
This function reads a CSV file, which may optionally be compressed with .xz or .gz, and extracts any comment lines (lines starting with comment_char). The CSV data is loaded into a pandas DataFrame and the comments are returned as a list of strings.
- Return type:
Tuple[DataFrame, List[str]]
- Parameters:
- file_path : str or pathlib.Path
Path to the CSV file. Supports uncompressed, .csv.xz, or .csv.gz files.
- columns : str or list of str, optional
Column name or list of column names to select from the CSV. If None, all columns are read.
- encoding : str, default "utf-8"
File encoding to use when reading the file.
- comment_char : str, default "#"
Lines starting with this character are treated as comments and returned separately.
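A usage sketch (the file name and the column names "Fx" and "Fy" are hypothetical):

>>> from befordata.csv import read_csv
>>> df, comments = read_csv("my_force_data.csv.xz", columns=["Fx", "Fy"])
>>> comments   # list of comment lines (starting with "#") found in the file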