# Elasticsearch (v1)

## Introduction

This section explains how to add an Elasticsearch data source, ingest data into Elasticsearch, and query that data from the AIOps/RDA environment.

## Adding Elasticsearch as a Datasource in RDA <a href="#adding-appdynamics-as-datasource" id="adding-appdynamics-as-datasource"></a>

RDA's user interface is used to configure the Elasticsearch data source.

**Step 1: Accessing the RDA UI**

Log in to RDA's user interface using a browser:

**https\://\<rda-ip-address>:9998**

Under '**Notebook**', click on the '**CFXDX Python 3**' box.

![](https://2978683539-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LhoMVYxiQlKXh6OxX98%2F-MkhXVidFE4B967UBQxm%2F-MkhXbGKJW_w7Uqi-oKF%2Fimage.png?alt=media\&token=10b1afd0-ab8f-49d3-adeb-64a865a1a351)

**Step 2:  Adding Elasticsearch data source instance to RDA/AIOps**

In the '**Notebook**' command box, type **`botadmin()`** and press **`Alt (or Option) + Enter`** to open the data source administration menu.

Click on the '**Add**' menu and, under the **Type** drop-down, select **`elasticsearch`**.

![Adding Elasticsearch data source to RDA/AIOps ](https://2978683539-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LhoMVYxiQlKXh6OxX98%2F-Mkhb6BFRQreJ3hfvUmI%2F-Mkhg66bGNRcw6x00oUr%2FScreen%20Shot%202021-09-28%20at%2011.34.41%20AM.png?alt=media\&token=2bff9d74-c610-4116-a4f9-8dea2e46d0d3)

* **Type:** Datasource/extension type; in this context, '**elasticsearch**'
* **Name:** Datasource/extension label, which must be unique within RDA
* **Hostname:** Elasticsearch IP address or FQDN/DNS name
* **Username:** User account that was created with read-only permissions
* **Password:** The user account's password

Click on '**Check Connectivity**' to verify network access and credential validity from RDA to the Elasticsearch instance. Once validated, click on the '**Add**' button to add Elasticsearch as a data source.
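Outside the UI, the same reachability-and-credentials check can be approximated with a basic-auth request against Elasticsearch's cluster health endpoint. The sketch below only builds the request; the host name and credentials are placeholders, not values from RDA:

```python
import base64
from urllib.request import Request

# Hypothetical endpoint and credentials -- substitute your own values.
host, user, password = "elastic-host", "rda_reader", "secret"

# Standard Elasticsearch health endpoint; a 200 response with valid
# credentials is effectively what 'Check Connectivity' verifies.
req = Request(f"http://{host}:9200/_cluster/health")
token = base64.b64encode(f"{user}:{password}".encode()).decode()
req.add_header("Authorization", f"Basic {token}")

# urllib.request.urlopen(req) would return the cluster health JSON
# if network access and credentials are valid.
```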

**Step 3: Adding a tag definition in RDA and associating it with an Elasticsearch index**

Once **Step 2** is complete and connectivity from RDA to Elasticsearch has been validated, the user can define a tag in RDA that maps to the Elasticsearch index created earlier.

In the '**Notebook**' command box, type **`botadmin()`** and **`alt (or option) + Enter`** to open the data source administration menu.

Click on the '**Edit**' menu and, under the **Type** drop-down, select the **'es/elasticsearch'** item that was created in Step 2 (as shown in the screenshot below).

![RDA tag 'rda-to-elasticsearch' to Elasticsearch index 'rda\_to\_elasticsearch\_idx' mapping](https://2978683539-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LhoMVYxiQlKXh6OxX98%2F-MkiA0EwB2tyAc6s-Kk9%2F-MkiBAhsfBsPZ5UtIPvY%2FScreen%20Shot%202021-09-28%20at%201.55.13%20PM.png?alt=media\&token=ae814290-f087-4020-ab4c-f9ede6ca94a8)

Note: In the above tag definition, RDA maps the tag (rda-to-elasticsearch) to the Elasticsearch index (rda\_to\_elasticsearch\_idx), using the column 'idx' as the unique document id.

The tag definition is captured in the code block below.

```yaml
- tag: rda-to-elasticsearch
  index: rda_to_elasticsearch_idx
  update:
    index: rda_to_elasticsearch_idx
    ids:
    - idx
```
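RDA's internal handling aside, the effect of this mapping can be sketched in plain Python: each row is written to the index with its 'idx' value as the document `_id`, so re-running a pipeline updates records instead of duplicating them. The `bulk_actions` helper and sample rows below are illustrative assumptions, not RDA code:

```python
# Illustrative sketch: how an id-mapped tag turns rows into
# Elasticsearch bulk-index actions (hypothetical helper, not RDA code).

def bulk_actions(rows, index, id_field):
    """Yield Elasticsearch bulk API action/document pairs.

    Using the id_field value as _id makes writes idempotent:
    re-ingesting the same row updates the existing document.
    """
    for row in rows:
        yield {"index": {"_index": index, "_id": str(row[id_field])}}
        yield row

rows = [
    {"idx": 1, "name": "David", "lastname": "Eiger"},
    {"idx": 2, "name": "Emma", "lastname": "Edge"},
]

# Each row produces one action line followed by the document itself.
actions = list(bulk_actions(rows, "rda_to_elasticsearch_idx", "idx"))
```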

**Note: Before performing Step 3, make sure the Elasticsearch index (rda\_to\_elasticsearch\_idx) has already been created in the Elasticsearch instance and verified using standard tools (e.g. curl or Postman).**

**Step 4: Adding data using RDA and storing it in Elasticsearch using the mapping that was created**

Create a pipeline "***rda\_to\_elasticsearch\_example\_1***", copy the code below into it, and perform the rest of the steps in your environment.

```
##### This pipeline creates a few user names and ids using RDA/AIOps Studio.
##### RDA uses the mapping that was created and stores the records into Elasticsearch.

--> @dm:empty
--> @dm:addrow idx = 1 & name = 'David' & lastname = 'Eiger' & email = 'deiger@hello.com'
--> @dm:addrow idx = 2 & name = 'Emma' & lastname = 'Edge' & email = 'eedge@hello.com'
--> @dm:addrow idx = 3 & name = 'John' & lastname = 'Seagul' & email = 'jseagul@hello.com'
--> @dm:addrow idx = 4 & name = 'Peter' & lastname = 'Samuel' & email = 'psamuel@hello.com'
--> @dm:addrow idx = 5 & name = 'Sean' & lastname = 'Taylor' & email = 'staylor@hello.com'
--> #es:rda-to-elasticsearch
```

![Pipeline added to ingest data from RDA to Elasticsearch](https://2978683539-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LhoMVYxiQlKXh6OxX98%2F-MkiFItqDDH-EuuRh0ew%2F-MkihUgkQeM4RTQ9U5Bw%2FScreen%20Shot%202021-09-28%20at%204.20.34%20PM.png?alt=media\&token=3db8f6f6-ebc3-4ff5-9be8-ba40b95e582d)

**Step 5:  Verify the above-added pipeline using AIOps/RDA by selecting the 'Verify' button as shown in the below screenshot**

![Verify button will validate the syntax of pipeline](https://2978683539-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LhoMVYxiQlKXh6OxX98%2F-MkihsNYOAmNI_dveZ7l%2F-MkiiIQNjFbP91S91GeK%2FScreen%20Shot%202021-09-28%20at%204.24.16%20PM.png?alt=media\&token=12986404-a4eb-4a0a-b407-64ad4c94f58b)

**Step 6:  Execute the pipeline by selecting the 'Execute' button as shown in the below screenshot**

![Successful execution of pipeline RDA to Elasticsearch storing data](https://2978683539-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LhoMVYxiQlKXh6OxX98%2F-MkiidR7yYCnOwpo7em1%2F-MkilbfoPqT_RTxNB3XC%2FScreen%20Shot%202021-09-28%20at%204.38.41%20PM.png?alt=media\&token=283a5ba5-f77c-4244-a5cf-48bf76afcc45)

**Step 7: Verify the data stored in Elasticsearch using RDA (and/or using the curl command).**

**Method A -- Using the curl command**

Step 1: Log in to the machine where the Elasticsearch instance is running using PuTTY or any other SSH tool.

```
bash# ssh macaw@10.95.103.111 
```

Step 2: Once you are logged in, run the following curl command to validate the stored data:

```
curl -X GET 'http://localhost:9200/rda_to_elasticsearch_idx/_search?pretty=true'
```

The above curl command returns the data as pretty-printed JSON output, as shown below.

![Curl command returns all the records which are stored via execution of pipeline](https://2978683539-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LhoMVYxiQlKXh6OxX98%2F-MkiljQNTVqPjK5k18J4%2F-MkinFQ0NE8gKYh6yYCY%2FScreen%20Shot%202021-09-28%20at%204.45.56%20PM.png?alt=media\&token=26530a3b-bd7b-43a1-8c06-83bda2e94030)
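If you prefer to script this check, note that a standard Elasticsearch `_search` response nests the stored documents under `hits.hits[]._source`. A minimal parsing sketch is shown below; the `extract_sources` helper is illustrative, and the sample response is trimmed to a single record:

```python
import json

def extract_sources(search_response):
    """Pull the stored documents out of an Elasticsearch _search response.

    The search API nests each document under hits.hits[]._source.
    """
    return [hit["_source"] for hit in search_response["hits"]["hits"]]

# A trimmed example of the JSON the curl command returns.
response = json.loads("""
{
  "hits": {
    "hits": [
      {"_index": "rda_to_elasticsearch_idx", "_id": "1",
       "_source": {"idx": 1, "name": "David", "lastname": "Eiger"}}
    ]
  }
}
""")

docs = extract_sources(response)
print(docs)
# → [{'idx': 1, 'name': 'David', 'lastname': 'Eiger'}]
```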

**Method B -- Using an RDA pipeline**

Step 1: Create a pipeline "***verify\_elasticsearch\_to\_rda\_data\_01***", copy the code below into it, and perform the rest of the steps in your environment.

```
##### This pipeline verifies the data stored via the RDA pipeline
#####
--> @c:new-block
--> #es:rda-to-elasticsearch
```

Step 2: Verify the above-created pipeline using RDA/AIOps Studio, as shown in the screenshot below.

![RDA queries data from Elasticsearch and outputs the stored records.](https://2978683539-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LhoMVYxiQlKXh6OxX98%2F-MkiqZidmN5IlUv8kuLq%2F-Mkiq_Hq_tGx5G51kQQ6%2FScreen%20Shot%202021-09-28%20at%205.00.22%20PM.png?alt=media\&token=cf3eee85-c536-4b43-b871-3e8b61bc5e21)

Step 3: Execute the pipeline and verify the output using RDA/AIOps Studio, as shown in the screenshots below.

![Execution of pipeline without any errors](https://2978683539-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LhoMVYxiQlKXh6OxX98%2F-MkiqzV2lBMz9mJGThcL%2F-MkirmgLME_AHUOmi0Ff%2FScreen%20Shot%202021-09-28%20at%205.05.36%20PM.png?alt=media\&token=e451caed-ffce-4631-b908-03c5064d659a)

![RDA prints the data as shown in the above screenshot](https://2978683539-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LhoMVYxiQlKXh6OxX98%2F-Mkis1URkQoGG1ShybwA%2F-Mkis3G-uTXuwaZgFBpp%2FScreen%20Shot%202021-09-28%20at%205.06.48%20PM.png?alt=media\&token=b4bc4575-0ad9-4032-b6fc-674548b587a3)

The above example walks through Elasticsearch integration with RDA using a simple inline dataset of users (name, last name, etc.). Datasets can also come from files and/or other data sources such as MySQL; users can explore those data sources using the same steps explained above.

Also, in the above example, a single Elasticsearch index with a single id column ('idx') has been used to walk through the use case. Users can extend this with additional indices or id columns to suit the use case in context.
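For instance, a second tag could map another dataset to its own index with a different id column. The sketch below follows the same tag-definition format as Step 3; all names here are hypothetical placeholders:

```yaml
- tag: rda-to-elasticsearch-orders
  index: rda_orders_idx
  update:
    index: rda_orders_idx
    ids:
    - order_id
```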
