Data Mapping cfxdm - dm:head

dm:head: This cfxdm tag fetches the top 'n' rows from the queried data.

dm:head syntax:

  • n (optional). Specifies the number of top rows to list. When this argument is not supplied, the top 10 rows are retrieved by default (see the sketch after this list).
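
For illustration, here is a minimal pipeline sketch showing dm:head with the optional 'n' argument. Note that 'example.csv' is a hypothetical file name, not part of this tutorial's dataset.

```
##### A minimal sketch; 'example.csv' is a hypothetical file name
##### used only for illustration. Without 'n', dm:head returns the
##### top 10 rows; with 'n = 5' it returns the top 5.
@files:loadfile filename = "example.csv"
--> @dm:head n = 5
```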

This section explains how to load a CSV file into a dataset and then use the dm:head function to inspect the top rows of the stored dataset.

Download the incidents.csv file to the local machine using a standard web browser, as shown below.

Example 1:

This example captures the default dm:head functionality.

Step 1: Download 'incidents.csv' from the local file system to the AIOps RDA environment as shown below.

Step 2: Upload the file 'incidents.csv' to AIOps studio using the file-browser (as shown below).

Step 3: Add a new empty pipeline with the name "dm_head_example_1" as shown below and click the "Save" button (this step creates an empty pipeline and saves it to AIOps studio).

Step 4: Add the following pipeline commands into the empty pipeline text field that you created in Step 3 above.

You can copy the code below into your pipeline and execute it in your environment.

```
##### This pipeline loads the incidents.csv file into AIOps Studio.
##### AIOps Studio stores the data loaded from the incidents.csv file
##### into a local dataset named 'incidents-summary'
##### and prints the data that was stored.
@files:loadfile filename = "incidents.csv"
--> @dm:save name = 'incidents-summary'
--> @dm:filter *
```

Step 5: Check the data from incidents.csv by executing the pipeline and verifying it using inspect data as shown below (screenshot-1 & screenshot-2).

Note: There are 436 rows stored in the 'incidents-summary' dataset that was loaded from the incidents.csv file.

Step 6: Now, edit the pipeline created in Step 4 to append the dm:head function as shown below, then click the verify button to validate the pipeline code.

```
##### This pipeline loads the incidents.csv file into AIOps Studio.
##### AIOps Studio stores the data loaded from the incidents.csv file
##### into a local dataset named 'incidents-summary'
##### and prints the data that was stored.
@files:loadfile filename = "incidents.csv"
--> @dm:save name = 'incidents-summary'
--> @dm:filter *
--> @dm:head
```

Step 7: Click the execute button to run the pipeline. RDA executes the pipeline without any errors (as shown below).

Step 8: RDA uses the dm:head function to select the top 10 rows and prints them to the output as shown below. In addition, it displays the number of rows selected by the default dm:head function that was run on the dataset stored from the incidents.csv file.
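
If you want to keep the trimmed rows rather than only print them, the same @dm:save tag from this example can be chained after dm:head. This is a minimal sketch; the dataset name 'incidents-top10' is a hypothetical name used only for illustration.

```
##### A minimal sketch: persist the default top 10 rows
##### as a separate dataset ('incidents-top10' is hypothetical).
@files:loadfile filename = "incidents.csv"
--> @dm:head
--> @dm:save name = 'incidents-top10'
```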

Example 2:

This example captures dm:head functionality with the additional optional argument 'n'.

Repeat Step-1, Step-2, and Step-3 as explained in Example 1.

In Step-3, add a new empty pipeline with the name "dm_head_example_2" as shown below and click the "Save" button (this step creates an empty pipeline and saves it to AIOps studio).

Step 4: Now, edit the empty pipeline created in Step-3 to add the following pipeline code using the dm:head function as shown below, then click the verify button to validate the pipeline code.

```
##### This pipeline loads the incidents.csv file into AIOps Studio.
##### AIOps Studio stores the data loaded from the incidents.csv file
##### into a local dataset named 'incidents-summary'
##### and prints the data that was stored.
@files:loadfile filename = "incidents.csv"
--> @dm:save name = 'incidents-summary'
--> @dm:filter *
--> @dm:head n = 20
```

Step 5: Click the execute button to run the pipeline. RDA executes the pipeline without any errors (as shown below).

Step 6: RDA uses the dm:head function to select the top 20 rows and prints them to the output as shown below. In addition, it displays the number of rows selected by the dm:head function with n = 20 that was run on the dataset stored from the incidents.csv file.
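
As with Example 1, the n-row selection can be persisted by chaining @dm:save after dm:head. This is a minimal sketch; the dataset name 'incidents-top20' is a hypothetical name used only for illustration.

```
##### A minimal sketch: select the top 20 rows and store them
##### as a separate dataset ('incidents-top20' is hypothetical).
@files:loadfile filename = "incidents.csv"
--> @dm:head n = 20
--> @dm:save name = 'incidents-top20'
```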
