Importing Data from a Parquet File

You can import data stored in a Parquet file, whether or not it has a table layout.

You can do it by either:

  • Dragging the Parquet file directly onto the Stage, creating an Import from Parquet File task.

  • Dragging an Import from Parquet File task onto the Stage: this operation allows you to perform a more precise import, as you can:

    • Import a single file: only one file is imported, specifying the dataset from which information will be taken.

    • Import multiple files together: in this case the files are concatenated to form a single table. This means that all files imported together must have the same structure.

A preview of the imported table is always displayed in the task.


Prerequisites

  • you must have created a flow;

  • if you are importing multiple files, they must have the same structure.


Procedure

  1. Drag an Import from Parquet File task onto the Stage.

  2. Double-click the task and open the task menu.

  3. Select whether you want to use a Saved or a Custom source.

  4. Choose from the drop-down list whether you want to import the file from your computer (Local) or from a remote connection.
    a. If you are importing from a Remote Filesystem, choose it from the list and then click on the pencil button to set the required connection information (only if you are using a Custom source). The tables are loaded in the Files tab.
    b. If you are using a Local Filesystem, click on the Select button and choose the path where the file is stored.

  5. Click on the Add new path button, located next to the Select button, to add new paths where other files to be imported are stored. You can add as many paths as you want.

  6. Click on the X button, located next to the Select button, to delete the corresponding path.

  7. Click on the Delete all paths button, located under the Add new path button, to delete all the inserted paths.

  8. Choose the resources to import by clicking the Select button in the Path 1 section.

  9. Click on the Concatenation Type drop-down arrow to choose Inner or Outer concatenation.
    a. Inner concatenation: the final table includes only the attributes that exist in both tables.
    b. Outer concatenation: the final table includes all the attributes, filling in any missing values where necessary.

  10. Click on the Match Column by drop-down arrow and choose whether you want to match columns by their Name or by their Position.

  11. Click on the Text Configuration tab and set the Parsing options and Import options, as shown in the table below.

  12. Save and compute the task.
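The Inner and Outer concatenation choices described above behave like column-wise set operations when columns are matched by Name. A minimal sketch with pandas (table contents are hypothetical):

```python
import pandas as pd

# Two tables whose attributes only partially overlap.
t1 = pd.DataFrame({"id": [1, 2], "price": [10.0, 12.5]})
t2 = pd.DataFrame({"id": [3, 4], "qty": [7, 3]})

# Inner concatenation: keep only the attributes present in both tables.
inner = pd.concat([t1, t2], join="inner", ignore_index=True)
print(list(inner.columns))    # ['id']

# Outer concatenation: keep all attributes, filling in missing values.
outer = pd.concat([t1, t2], join="outer", ignore_index=True)
print(sorted(outer.columns))  # ['id', 'price', 'qty']
```

With outer concatenation, the rows coming from `t1` have no `qty` value and the rows coming from `t2` have no `price` value, so those cells are filled with missing values.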


Parsing and import options


Parsing options

Here you can set:

  • Missing string: enter the string you want to remove from the dataset; it will be treated as a missing value.

  • Text delimiter: select ' or " if either of these symbols has been used as a string delimiter. They will not be included in the imported file. For example, the string “apartment” will be imported as apartment. This option removes all instances of the text delimiter in the string, not only the opening and closing symbols. The only exception to this rule is when the symbol is preceded by a backslash. For example, "ad\"cb" will be imported as ad"cb, while "ad"cb" will be imported as adcb.
    The data type for values with string delimiters is nominal, and this data type is not altered by the removal of text delimiters. For example, “3” will be imported as 3, but will remain a nominal value, instead of being converted to an integer.
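The delimiter-stripping rule above can be sketched in Python; `strip_text_delimiters` is a hypothetical helper written for illustration, not part of the product:

```python
def strip_text_delimiters(s: str, delim: str = '"') -> str:
    """Remove every unescaped text delimiter from s; a delimiter
    preceded by a backslash survives (the backslash is dropped)."""
    out = []
    i = 0
    while i < len(s):
        ch = s[i]
        if ch == "\\" and i + 1 < len(s) and s[i + 1] == delim:
            out.append(delim)  # escaped delimiter is kept
            i += 2
        elif ch == delim:
            i += 1             # unescaped delimiter is dropped
        else:
            out.append(ch)
            i += 1
    return "".join(out)

print(strip_text_delimiters('"apartment"'))  # apartment
print(strip_text_delimiters('"ad\\"cb"'))    # ad"cb
print(strip_text_delimiters('"ad"cb"'))      # adcb
```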

Import options

  • Start importing from line: the number of the line from which the importing operations will start.

  • Stop importing at line: the number of the line where the importing operations will end. Leave the value 0 if you want the whole dataset to be imported.

  • Column to be imported (empty for all): the numbers of the columns to be imported. If left empty, all the columns will be imported.

  • Remove empty rows: select the check box if you want to remove the empty rows from the imported dataset.

  • Case sensitive: select the checkbox if you want uppercase characters to be considered distinct from lowercase characters. Read the section below for the consequences on your data if this checkbox is selected.

  • Strip spaces: select this option if you want to remove spaces surrounding strings. For example, the string “ class “ will be imported as “class”.

  • Add an attribute containing filename: select this option to add an extra column with the name of the file to the dataset.

  • Remove empty columns: select the check box if you want to remove the empty columns from the imported dataset.

  • Compress white spaces: select it to remove extra consecutive spaces from within strings. For example, the string "university    program" would be imported as "university program".

  • Use old computation data if the source file is not available: if selected, data from the previous computations will be used if the source table is not available.

  • Continue the execution if the folder is empty: if selected, the computation of the task continues even if the selected source files are not available.

  • Turn off smart type recognition: if selected, prevents automatic recognition of data types, leaving the generic nominal type. This option is useful when manual identification is preferable, for example when there is the risk of a code being misinterpreted as a date.

  • Wait until the target file is present (poll every X seconds): if selected, Rulex polls the target file with the frequency specified (sleeptime) until it is available.
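Several of the import options above are simple row transformations. A sketch of how the line window, Strip spaces, Compress white spaces, and Remove empty rows options combine (the variable names mirror the table but are hypothetical; this is not the product's implementation):

```python
import re

# Hypothetical option values, mirroring the table above.
start_line, stop_line = 2, 4   # import lines 2..4 (0 = whole file)
strip_spaces = True
compress_white_spaces = True
remove_empty_rows = True

rows = [
    "header",
    "  university    program  ",
    "",
    " class ",
    "ignored tail",
]

# Select the requested line window (1-based, inclusive).
end = len(rows) if stop_line == 0 else stop_line
selected = rows[start_line - 1:end]

cleaned = []
for r in selected:
    if strip_spaces:
        r = r.strip()                  # drop surrounding spaces
    if compress_white_spaces:
        r = re.sub(r" {2,}", " ", r)   # collapse runs of spaces
    if remove_empty_rows and r == "":
        continue                       # skip empty rows entirely
    cleaned.append(r)

print(cleaned)  # ['university program', 'class']
```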


Focus on the Case Sensitive checkbox

We encourage you not to select the Case Sensitive checkbox, as it has a significant impact on the data analysis.

If the Case Sensitive checkbox is selected, the number of distinct values in a column can increase, causing a slight difference in the data analysis.

In fact, if we have two strings, 'Word' and 'word', they will be considered two distinct values. This means that a function written for the string 'word' will not be valid for the string 'Word'.

It can also have consequences on attributes: if you want to apply a function to the $"Word" attribute but type $"word" in the formula bar, an error occurs during the computation process.
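The effect on the number of distinct values can be seen in a few lines of Python (the sample values are hypothetical):

```python
values = ["Word", "word", "WORD", "word"]

# Case sensitive: 'Word', 'word' and 'WORD' are three distinct values.
print(len(set(values)))                  # 3

# Case insensitive: they collapse into a single value.
print(len({v.lower() for v in values}))  # 1
```

This is why selecting the checkbox can inflate the distinct-value count of a column and change the results of later analysis steps.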