HTTP API Log Analysis and Display System
Domain: http://log.mysql.cluster.sina.com.cn/ (10.73.24.156)
1. Basic Architecture
Built with Logstash + Redis + Elasticsearch + Kibana.
2. Environment Deployment
2.1 Logstash agent deployment environment
Deployed on the two machines that host the API:
10.73.24.123
10.73.11.169
Path: /data1/mysqlapi_log/logstash
Version: 1.4.2
Note: the ELK stack is sensitive to version mismatches, so use the same version throughout.
2.2 Logstash agent configuration file
input {
  file {
    type => "nginx_access"
    path => [ "/data1/nginx/mysqlapi_nginx.log" ]
  }
  file {
    type => "nginx_error"
    path => [ "/data1/nginx/mysqlapi_error.log" ]
  }
  file {
    type => "mysqlapi_log"
    path => [ "/data1/mysqlapi/log/*.log" ]
  }
}
output {
  stdout { codec => rubydebug }
  redis {
    host => "10.73.24.156"
    port => 6379
    data_type => "list"
    key => "logstash"
  }
}
The stdout { codec => rubydebug } output is there for debugging only.
2.3 Starting the Logstash agent
nohup /data1/mysqlapi_log/logstash/bin/logstash -f
/data1/mysqlapi_log/logstash/etc/logstash.agent.conf &
3. Redis Deployment
IP: 10.73.24.156  Port: 6379  Version: 2.6.16
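The Redis list acts as a buffer between the shipping agents and the indexer: agents append JSON-encoded events to the "logstash" list, and the indexer pops them off in arrival order. A minimal sketch of that flow, simulated here with an in-process deque so it runs without a Redis server (a real deployment would use RPUSH/BLPOP against 10.73.24.156:6379):

```python
import json
from collections import deque

# Stand-in for the Redis list "logstash" (FIFO: append right, pop left).
queue = deque()

def agent_push(event):
    """The shipper side: serialize the event and append it to the list."""
    queue.append(json.dumps(event))

def indexer_pop():
    """The indexer side: pop the oldest event and decode it."""
    return json.loads(queue.popleft())

# Two events arriving from the agents, in order.
agent_push({'type': 'nginx_access', 'message': 'GET /query 200'})
agent_push({'type': 'nginx_error', 'message': 'upstream timed out'})

first = indexer_pop()
print(first['type'])  # nginx_access: FIFO order, oldest event first
```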
4. Logstash Indexer Deployment
Machine IP: 10.73.24.156
Path: /data1/mysqlapi/logstash
Version: 1.4.2
Configuration file: /data1/mysqlapi/logstash/etc/logstash.index.conf
input {
  redis {
    host => '10.73.24.156'
    data_type => 'list'
    port => 6379
    key => 'logstash'
    type => 'redis-input'
  }
}
filter {
  if [type] == "nginx_error" {
    grok {
      match => ["message", "%{NIGNX_ERROR_TIMESTAMP:time} \[%{LOGLEVEL:loglevel}\] %{BASE10NUM:errornum}#%{BASE10NUM}:(?<msg>[a-z() 0-9*A-Z:]+), client: %{IP:client}, server: %{HOSTNAME:server}, request: \"GET %{URIPATH:interface}%{URIPARAM:interface_param} (?<netsource>[A-Z/.1-9]+)\", upstream: \"%{URIPROTO}://%{URIHOST:api_server}%{URIPATHPARAM}\", host: \"%{HOSTNAME:host}\""]
    }
  }
  mutate {
    convert => ["requestTime", "float"]
    convert => ["port", "float"]
  }
}
output {
  elasticsearch {
    host => '10.73.24.156'
    embedded => true
  }
}
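To see what the grok expression above extracts, here is a rough Python equivalent of the pattern applied to a made-up nginx error line (both the regex and the sample line are illustrations built from the grok captures, not taken from the live logs):

```python
import re

# Approximate Python translation of the grok expression; group names
# follow the grok captures (time, loglevel, client, server, ...).
NGINX_ERROR_RE = re.compile(
    r'(?P<time>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) '
    r'\[(?P<loglevel>\w+)\] '
    r'(?P<errornum>\d+)#\d+: (?P<msg>[a-z() 0-9*A-Z:]+), '
    r'client: (?P<client>[\d.]+), '
    r'server: (?P<server>[\w.-]+), '
    r'request: "GET (?P<interface>/[^?"]*)(?P<interface_param>\?[^ "]*) '
    r'(?P<netsource>[A-Z/.1-9]+)", '
    r'upstream: "(?P<api_server>https?://[^/"]+)[^"]*", '
    r'host: "(?P<host>[\w.-]+)"'
)

# A synthetic sample line in the same shape as an nginx upstream-timeout error.
sample = ('2015/01/12 10:01:02 [error] 12345#0: '
          '*99 upstream timed out (110: Connection timed out)'
          ', client: 10.73.11.169, server: api.example.com, '
          'request: "GET /query?db=test HTTP/1.1", '
          'upstream: "http://10.73.24.123:8080/query", '
          'host: "api.example.com"')

m = NGINX_ERROR_RE.search(sample)
print(m.group('client'), m.group('interface'), m.group('loglevel'))
```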
5. Elasticsearch Deployment
Machine IP: 10.73.24.156
Path: /data1/mysqlapi/elasticsearch
Version: 1.4.2
Install elasticsearch-servicewrapper, then unpack it into Elasticsearch's bin directory.
To verify:
[root@titan156 service]# curl -X GET http://localhost:9200
{
  "status" : 200,
  "name" : "Cassiopea",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.4.2",
    "build_hash" : "927caff6f05403e936c20bf4529f144f0c89fd8c",
    "build_timestamp" : "2014-12-16T14:11:12Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.2"
  },
  "tagline" : "You Know, for Search"
}
Grok is the heart of the filter stage; there is a site dedicated to debugging grok expressions:
http://grokdebug.herokuapp.com/
Commonly used patterns:
https://github.com/logstash/logstash/tree/v1.4.0/patterns
Custom patterns go under the Logstash home directory, in the
patterns/grok-patterns file.
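The NIGNX_ERROR_TIMESTAMP pattern referenced in the indexer config is one such custom pattern. Its actual definition is not captured in this document; a plausible definition, composed from the stock YEAR/MONTHNUM/MONTHDAY/TIME patterns to match nginx's "2015/01/12 10:01:02" timestamp style, would look like:

```
# Hypothetical entry in patterns/grok-patterns (the deployed definition
# may differ):
NIGNX_ERROR_TIMESTAMP %{YEAR}/%{MONTHNUM}/%{MONTHDAY} %{TIME}
```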
6. Kibana Deployment
Machine IP: 10.73.24.156
Deployed in Nginx's html directory:
/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
Kibana path:
/usr/local/nginx/html/kibana
Edit Kibana's config.js to point it at the Elasticsearch instance it should read from:
elasticsearch: "http://10.73.24.156:9200"
Open question: where is the index configured?
7. Testing and Verification
Redirect:
8. How to Look Up Errors
9. Log Monitoring and Alerts
Deployed on the two machines running the API's nginx.
Managed as a cron job; the schedule is:
#mysqlapi_nginx_monitor
*/5 * * * * python26 /data1/mysqlapi_log/mysqlapi_monitor/mysqlapi_monitor.py > /dev/null 2>&1
Rule: scan the error log, and if more than 200 error entries appear within 5 minutes, send an alert.
Then use the charts above to find which interface calls timed out.
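The monitor script itself is not reproduced here; the following is a simplified sketch of the threshold logic only, with hypothetical function names and synthetic log lines (a real run would read the nginx error log and actually send the alert):

```python
from datetime import datetime, timedelta

THRESHOLD = 200            # alert when more than this many errors appear
WINDOW = timedelta(minutes=5)

def count_recent_errors(lines, now):
    """Count log lines whose leading nginx timestamp falls inside the window."""
    count = 0
    for line in lines:
        try:
            # nginx error logs start with "YYYY/MM/DD HH:MM:SS"
            ts = datetime.strptime(line[:19], '%Y/%m/%d %H:%M:%S')
        except ValueError:
            continue  # skip lines without a leading timestamp
        if now - ts <= WINDOW:
            count += 1
    return count

def should_alert(lines, now):
    return count_recent_errors(lines, now) > THRESHOLD

# Synthetic data: 250 errors in the last 5 minutes plus one stale entry.
now = datetime(2015, 1, 12, 10, 5, 0)
recent = ['2015/01/12 10:0%d:00 [error] worker timeout' % (i % 5)
          for i in range(250)]
old = ['2015/01/12 09:00:00 [error] stale entry']
print(should_alert(recent + old, now))  # True: 250 recent errors > 200
```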
