Unable to set filter type to "number"

Hi Team,

I have followed the steps in Elastic Stack Essentials to deploy an ELK stack, but I am not able to set the type of the "apache2.access.body_sent.bytes" field to number, so I cannot format it as bytes/kilobytes.

It looks like the "number" option is missing from the list:

[Screenshot: the field's format dropdown in Kibana, with no "number" option]

Following is the apache.conf file for Logstash; it is the same file the instructor downloaded and used in the course.


input {
  beats {
    port => 5044
  }
}

filter {
  if [fileset][module] == "apache2" {
    if [fileset][name] == "access" {
      grok {
        match => { "message" => ["%{IPORHOST:[apache2][access][remote_ip]} - %{DATA:[apache2][access][user_name]} \[%{HTTPDATE:[apache2][access][time]}\] \"%{WORD:[apache2][access][method]} %{DATA:[apache2][access][url]} HTTP/%{NUMBER:[apache2][access][http_version]}\" %{NUMBER:[apache2][access][response_code]} %{NUMBER:[apache2][access][body_sent][bytes]}( \"%{DATA:[apache2][access][referrer]}\")?( \"%{DATA:[apache2][access][agent]}\")?",
          "%{IPORHOST:[apache2][access][remote_ip]} - %{DATA:[apache2][access][user_name]} \\[%{HTTPDATE:[apache2][access][time]}\\] \"-\" %{NUMBER:[apache2][access][response_code]} -" ] }
        remove_field => "message"
      }
      mutate {
        add_field => { "read_timestamp" => "%{@timestamp}" }
      }
      date {
        match => [ "[apache2][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
        remove_field => "[apache2][access][time]"
      }
      useragent {
        source => "[apache2][access][agent]"
        target => "[apache2][access][user_agent]"
        remove_field => "[apache2][access][agent]"
      }
      geoip {
        source => "[apache2][access][remote_ip]"
        target => "[apache2][access][geoip]"
      }
    }
    else if [fileset][name] == "error" {
      grok {
        match => { "message" => ["\[%{APACHE_TIME:[apache2][error][timestamp]}\] \[%{LOGLEVEL:[apache2][error][level]}\]( \[client %{IPORHOST:[apache2][error][client]}\])? %{GREEDYDATA:[apache2][error][message]}",
          "\[%{APACHE_TIME:[apache2][error][timestamp]}\] \[%{DATA:[apache2][error][module]}:%{LOGLEVEL:[apache2][error][level]}\] \[pid %{NUMBER:[apache2][error][pid]}(:tid %{NUMBER:[apache2][error][tid]})?\]( \[client %{IPORHOST:[apache2][error][client]}\])? %{GREEDYDATA:[apache2][error][message1]}" ] }
        pattern_definitions => {
          "APACHE_TIME" => "%{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{YEAR}"
        }
        remove_field => "message"
      }
      mutate {
        rename => { "[apache2][error][message1]" => "[apache2][error][message]" }
      }
      date {
        match => [ "[apache2][error][timestamp]", "EEE MMM dd H:m:s YYYY", "EEE MMM dd H:m:s.SSSSSS YYYY" ]
        remove_field => "[apache2][error][timestamp]"
      }
    }
  }
}

output {
  elasticsearch {
    hosts => "localhost"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}





  • Myles Y
    12-31-2018

    Hi Sona, while this may be the same config I used in the course, part of the pipeline setup is running the filebeat setup command to push the index template for this data to Elasticsearch beforehand, so that every field gets the correct data type.


    Kibana can set formatting options for a given data type, but it cannot change the data type itself. You either need to ensure the Filebeat template is loaded into Elasticsearch before indexing, or change your Logstash config to index that field as an integer, like so:

    Change "%{NUMBER:[apache2][access][body_sent][bytes]}" to "%{NUMBER:[apache2][access][body_sent][bytes]:int}"
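
    In context, the only change is the :int suffix on the body_sent capture in the access grok pattern, which tells grok to emit the field as an integer instead of a string. Alternatively, if you would rather leave the grok pattern untouched, a mutate convert placed after the grok block does the same thing (this snippet is a sketch of that alternative, not part of the course config):

        mutate {
          convert => { "[apache2][access][body_sent][bytes]" => "integer" }
        }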

    In either case, after making the data-type change you will need to either rename your index or delete the existing index of the same name before indexing again; otherwise you will get errors from data-type conflicts.
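
    For example, one way to get a fresh index name is to add a suffix in the Logstash output block (the "-v2" suffix here is only an illustration; any new name works):

        output {
          elasticsearch {
            hosts => "localhost"
            manage_template => false
            index => "%{[@metadata][beat]}-%{[@metadata][version]}-v2-%{+YYYY.MM.dd}"
          }
        }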

  • Sona D
    12-31-2018

    Thank you very much, Sir. Changing the index field really helped.
    Wishing you a happy New Year!

  • Myles Y
    01-02-2019

    Happy to help!
