elasticsearch - How to extract variables from a log file path and test the log file name for a pattern in Logstash?


I have AWS Elastic Beanstalk instance logs in an S3 bucket.

The log path is:

resources/environments/logs/publish/e-3ykfgdfgmp8/i-cf216955/_var_log_nginx_rotated_access.log1417633261.gz 

which translates to:

resources/environments/logs/publish/e-[random environment id]/i-[random instance id]/

The path contains multiple logs:

_var_log_eb-docker_containers_eb-current-app_rotated_application.log1417586461.gz
_var_log_eb-docker_containers_eb-current-app_rotated_application.log1417597261.gz
_var_log_rotated_docker1417579261.gz
_var_log_rotated_docker1417582862.gz
_var_log_rotated_docker-events.log1417579261.gz
_var_log_nginx_rotated_access.log1417633261.gz

Notice that there is a random number (a timestamp?) inserted by AWS into the file name before ".gz".

The problem is that I need to set variables depending on the log file name.

Here is my configuration:

input {
  s3 {
    debug => "true"
    bucket => "elasticbeanstalk-us-east-1-something"
    region => "us-east-1"
    region_endpoint => "us-east-1"
    credentials => ["..."]
    prefix => "resources/environments/logs/publish/"
    sincedb_path => "/tmp/s3.sincedb"
    backup_to_dir => "/tmp/logstashed/"
    tags => ["s3","elastic_beanstalk"]
    type => "elastic_beanstalk"
  }
}

filter {
  if [type] == "elastic_beanstalk" {
    grok {
      match => [ "@source_path", "resources/environments/logs/publish/%{environment}/%{instance}/%{file}<unnecessary_number>.gz" ]
    }
  }
}

In this case I want to extract the environment, instance, and file name from the path, and in the file name I need to ignore the random number. Am I doing this the right way? What would be a full, correct solution for this?


Another question: how can I specify fields for the custom log format of a particular log file from the list above?

Something like this (meta-code):

filter {
  if [type] == "elastic_beanstalk" {
    if [file_name] begins "application_custom_log" {
      grok {
        match => [ "message", "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" ]
      }
    }
    if [file_name] begins "some_other_custom_log" {
      ....
    }
  }
}

How do I test the file name for a pattern?

For the first question, and assuming that @source_path contains the full path, try:

match => [ "@source_path", "logs/publish/%{NOTSPACE:env}/%{NOTSPACE:instance}/%{NOTSPACE:file}%{NUMBER}%{NOTSPACE:suffix}" ]

This will create four Logstash fields for you:

  • env
  • instance
  • file
  • suffix

More information is available on the grok man page, and you should test your patterns with the grok debugger.
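Putting that together, here is a minimal sketch of the full filter block (the conditional wrapper and the field names come from the question and the pattern above; verify it against your real paths in the grok debugger before relying on it):

filter {
  if [type] == "elastic_beanstalk" {
    grok {
      # pull the environment id, instance id and base file name out of the S3 key;
      # the unnamed %{NUMBER} should match the rotation timestamp without storing it,
      # since grok keeps only named captures by default
      match => [ "@source_path", "logs/publish/%{NOTSPACE:env}/%{NOTSPACE:instance}/%{NOTSPACE:file}%{NUMBER}%{NOTSPACE:suffix}" ]
    }
  }
}

With the sample path above, file should come out as _var_log_nginx_rotated_access.log and suffix as .gz, which is exactly what the second question needs to branch on; again, confirm this in the grok debugger.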

To test fields in Logstash, use conditionals, e.g.

if [field] == "value"
if [field] =~ /regexp/

etc.
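For example, a hedged sketch that branches on the file field extracted above (the nginx pattern is an assumption: nginx's default access log format is close to Apache's combined format, hence COMBINEDAPACHELOG):

filter {
  if [file] =~ /nginx_rotated_access/ {
    grok {
      # parse the nginx access line, assuming the default combined-style log format
      match => [ "message", "%{COMBINEDAPACHELOG}" ]
    }
  }
}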

Note that the conditionals aren't strictly necessary with grok. You can have multiple 'match' arguments, and (by default) grok will stop after the first one that matches. If your patterns are mutually exclusive, that should work for you.
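As a sketch, the multiple-match variant could look like this (the second pattern is simply the asker's example from the question; grok tries the patterns in order and stops at the first match):

filter {
  grok {
    # patterns are tried in order; break_on_match is true by default
    match => [
      "message", "%{COMBINEDAPACHELOG}",
      "message", "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}"
    ]
  }
}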

