Using Logistic to solve Classification Problems

The Logistic task solves classification problems according to the logistic regression approach, i.e. by approximating the probabilities associated with the different classes through a logistic function:

$$p_j(x) = \frac{\exp\left(\sum_{i=1}^{d} w_{ji}\, x_i\right)}{1 + \sum_{k=1}^{c-1} \exp\left(\sum_{i=1}^{d} w_{ki}\, x_i\right)}, \qquad j = 1, \dots, c-1$$

where

  • c is the number of output classes,

  • j is the index of the class, ranging from 1 to c−1,

  • d is the number of inputs, and

  • $w_{ji}$ is the weight for class j and input i.

The probability for the c-th class is obtained as $p_c(x) = 1 - \sum_{j=1}^{c-1} p_j(x) = \dfrac{1}{1 + \sum_{k=1}^{c-1} \exp\left(\sum_{i=1}^{d} w_{ki}\, x_i\right)}$.

The optimal weight matrix $w_{ji}$ is retrieved by means of a Maximum Likelihood Estimation (MLE) approach, which uses the Newton-Raphson procedure to find the maximum of the log-likelihood function.
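
As an illustration of this fitting scheme, the following sketch applies Newton-Raphson to a two-class logistic model (Python with NumPy). It is only an outline of the general technique under assumed conventions (no intercept term, synthetic data, a small ridge term for numerical stability) and does not reproduce the task's actual implementation.

    import numpy as np

    def fit_logistic_newton(X, y, n_iter=25, ridge=1e-6):
        """Two-class logistic regression fitted by Newton-Raphson.

        X: (n, d) input matrix; y: (n,) labels in {0, 1}.
        'ridge' is a small term added to the diagonal of the Hessian to
        keep it invertible (similar in spirit to the Regularization
        parameter option described below).
        """
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-X @ w))                   # predicted probabilities
            gradient = X.T @ (y - p)                           # gradient of the log-likelihood
            hessian = -(X * (p * (1.0 - p))[:, None]).T @ X    # Hessian of the log-likelihood
            hessian -= ridge * np.eye(d)
            w -= np.linalg.solve(hessian, gradient)            # Newton-Raphson update
        return w

    # Hypothetical usage on noisy synthetic data
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200) > 0).astype(float)
    print(fit_logistic_newton(X, y))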

The output of the task is the weight matrix $w_{ji}$, which can be employed by an Apply Model task to perform the Logistic forecast on a set of examples.
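
As a minimal sketch of how such a weight matrix translates into class probabilities (following the formulas above), consider the following Python/NumPy fragment; the weight values and inputs are purely hypothetical.

    import numpy as np

    def logistic_probabilities(W, x):
        """Class probabilities for one example.

        W: (c-1, d) weight matrix, one row per class j = 1, ..., c-1.
        x: (d,) input vector.
        Returns c probabilities; the last entry is the probability of
        the c-th class.
        """
        scores = np.exp(W @ x)            # exp(sum_i w_ji * x_i) for j = 1, ..., c-1
        denom = 1.0 + scores.sum()        # shared normalization term
        return np.append(scores / denom, 1.0 / denom)

    # Hypothetical case with c = 3 classes and d = 2 inputs
    W = np.array([[0.5, -1.0],
                  [0.2,  0.3]])
    x = np.array([1.0, 2.0])
    print(logistic_probabilities(W, x))   # the c probabilities sum to 1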


Prerequisites

Additional tabs

  • the Results tab, where statistics such as the execution time, the number of attributes, etc. are displayed.

  • the Coefficients tab, where the weight matrix $w_{ji}$ of the Logistic approximation is shown. Each row corresponds to an output class, whereas each column corresponds to a single input attribute.


Procedure

  1. Drag the Logistic task onto the stage.

  2. Connect a task, which contains the attributes from which you want to create the model, to the new task.

  3. Double click the Logistic task. 

  4. Drag and drop the input attributes, which will be used for regression, from the Available attributes list on the left to the Selected input attributes list.

  5. Configure the options described in the table below.

  6. Save and compute the task.

Logistic options

Parameter Name

Description

Selected input attributes (drag from available attributes)

Drag and drop the input attributes you want to use to form the model leading to the correct classification of data.

Normalization of input variables

The type of normalization to use when treating ordered (discrete or continuous) variables.

Possible methods are:

  • None: no normalization is performed (default)

  • Normal: data are normalized according to the Gaussian distribution, where μ is the average of the attribute and σ is its standard deviation:

     $x' = \dfrac{x - \mu}{\sigma}$

  • Minmax [0,1]: data are normalized to be comprised in the range [0,1]:

     $x' = \dfrac{x - x_{\min}}{x_{\max} - x_{\min}}$

  • Minmax [-1, 1]: data are normalized to be included in the range [-1, 1]:

     $x' = 2\,\dfrac{x - x_{\min}}{x_{\max} - x_{\min}} - 1$

(These three transformations are also sketched in code after the options table.)

Every attribute can have its own value for this option, which can be set in the Data Manager task. These per-attribute choices are preserved if Attribute is selected in the Normalization of input variables option; otherwise, the selection made here overwrites any previous per-attribute setting.

 

Normalization types

For further information on the possible types, see Advanced Attributes' Management in the Attribute Tab.

Output attribute (response variable)

Select the attribute which will be used to identify the output.

P-value confidence (%)

Specify the value of the required confidence coefficient.

Weight attribute

If specified, this attribute represents the relevance (weight) of each sample (i.e., of each row).

Regularization parameter

Specify the value of the regularization parameter, which is added to the diagonal of the matrix.

Initialize random generator with seed

If selected, a seed, which defines the starting point in the sequence, is used during random generation operations. Consequently, using the same seed each time makes each execution reproducible. Otherwise, each execution of the same task (with the same options) may produce dissimilar results, due to different random numbers being generated in some phases of the process.

Aggregate data before processing

If selected, identical patterns are aggregated and considered as a single pattern during the training phase.

Append results

If selected, the results of this computation are appended to the dataset, otherwise they replace the results of previous computations.
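
For reference, the sketch below spells out the three normalization formulas listed in the Normalization of input variables option on a single numeric column (plain NumPy; the values are hypothetical and the task's internal bookkeeping may differ).

    import numpy as np

    x = np.array([2.0, 5.0, 7.0, 10.0])                 # hypothetical ordered attribute

    # Normal: center on the average and divide by the standard deviation
    normal = (x - x.mean()) / x.std()

    # Minmax [0,1]: map the minimum to 0 and the maximum to 1
    minmax_01 = (x - x.min()) / (x.max() - x.min())

    # Minmax [-1,1]: same rescaling, stretched to cover [-1, 1]
    minmax_11 = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0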


Example

The following example uses the Adult dataset.

  • After importing the dataset with the Import from Text File task and splitting it into test and training sets (30% test and 70% training) with the Split Data task, add a Logistic task to the flow and configure the following options:

    • Normalization of input variables: None

    • Output attribute: Income

    • Drag and drop all remaining attributes onto the Selected input attributes list.

Leave the remaining options with their default values and compute the task.

Once computation has completed, the Coefficients tab contains a single row with the coefficients relative to the output class <=50K.

In a case with c>2 output classes, the weight matrix contains c−1 rows, each containing the coefficients relative to an output class.

The probability of the last class is obtained as $p_c(x) = 1 - \sum_{j=1}^{c-1} p_j(x)$.

The Results tab contains a summary of the computation.

Then add an Apply Model task to forecast the output associated with each pattern of the dataset. 

To check how the model built by the Logistic task has been applied to our dataset, right-click the Apply Model task and select Take a look.

Two result columns have been added:

  • The pred(income) column contains the output forecast generated by the Logistic model.

  • The err(income) column contains the error, which corresponds to the difference between the predicted output and the real one. If the actual output is missing, this field is also left empty.
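
For a rough picture of what these two columns contain for a binary model like this one, here is a hedged pandas/NumPy sketch. The column names mirror the example, but the 'income' attribute name, the helper function, and the rendering of the error as a 0/1 mismatch (left empty when the actual value is missing) are illustrative assumptions, and the selected inputs are assumed to be numeric (already encoded).

    import numpy as np
    import pandas as pd

    def apply_binary_logistic(df, inputs, weights, labels=("<=50K", ">50K")):
        """Apply a fitted two-class logistic model to a dataset.

        weights: coefficients of the first class (the single row shown in
        the Coefficients tab), one value per numeric input column.
        """
        X = df[inputs].to_numpy(dtype=float)
        score = np.exp(X @ weights)
        p_first = score / (1.0 + score)                    # probability of labels[0]
        out = df.copy()
        out["pred(income)"] = np.where(p_first >= 0.5, labels[0], labels[1])
        actual = df["income"]
        # err(income): left empty when the actual output is missing,
        # otherwise 0 for a correct prediction and 1 for a wrong one
        out["err(income)"] = np.where(actual.isna(), np.nan,
                                      (out["pred(income)"] != actual).astype(float))
        return out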