This document provides an overview and instructions for installing and configuring ProxySQL. It discusses:
1. What ProxySQL is and its functions like load balancing and query caching
2. How to install ProxySQL on CentOS and configure the /etc/proxysql.cnf file
3. How to set up the ProxySQL schema to define servers, users, variables and other settings needed for operation
4. How to test ProxySQL behavior, such as backend server status changes, and how to benchmark performance
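As a rough sketch of item 3, backend servers and application users are usually defined through ProxySQL's admin interface (port 6032 by default); the hostgroup number, hostname, and credentials below are placeholders rather than values from the original document.

-- Connect to the admin interface, e.g.: mysql -u admin -padmin -h 127.0.0.1 -P6032
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (10, 'db1.example.com', 3306);              -- hypothetical backend server
INSERT INTO mysql_users (username, password, default_hostgroup)
VALUES ('appuser', 'apppass', 10);                 -- hypothetical application user
-- Activate the new configuration and persist it
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL USERS TO RUNTIME;   SAVE MYSQL USERS TO DISK;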
This document explains how to set up ProxySQL to log queries from users connecting directly to the database servers. It details installing and configuring ProxySQL to log queries to binary files, using a tool to convert the binary logs to text format, and setting up an ELK stack to index the query logs and make them searchable in Kibana. Filebeat is configured to ship the text query logs to Logstash, which parses them and sends the data to Elasticsearch. Kibana provides a web interface for viewing and analyzing the query logs.
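As a hedged sketch (not taken from the deck itself), query logging in ProxySQL is normally enabled from the same admin interface by pointing mysql-eventslog_filename at a log file and adding a query rule with log=1; the catch-all rule below is purely illustrative.

SET mysql-eventslog_filename = 'queries.log';
LOAD MYSQL VARIABLES TO RUNTIME; SAVE MYSQL VARIABLES TO DISK;
INSERT INTO mysql_query_rules (rule_id, active, match_digest, log, apply)
VALUES (1, 1, '.', 1, 0);                          -- log every query (illustrative)
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;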
This is the presentation delivered by Karthik.P.R at MySQL User Camp Bangalore on 09th June 2017. ProxySQL is a high-performance MySQL load balancer designed to scale database servers.
This document discusses Patroni, an open-source tool for managing high availability PostgreSQL clusters. It describes how Patroni uses a distributed configuration system like Etcd or Zookeeper to provide automated failover for PostgreSQL databases. Key features of Patroni include manual and scheduled failover, synchronous replication, dynamic configuration updates, and integration with backup tools like WAL-E. The document also covers some of the challenges of building automatic failover systems and how Patroni addresses issues like choosing a new master node and reattaching failed nodes.
Devrim Gunduz gives a presentation on Write-Ahead Logging (WAL) in PostgreSQL. WAL logs all transactions to files called write-ahead logs (WAL files) before changes are written to data files. This allows for crash recovery by replaying WAL files. WAL files are used for replication, backup, and point-in-time recovery (PITR) by replaying WAL files to restore the database to a previous state. Checkpoints write all dirty shared buffers to disk and update the pg_control file with the checkpoint location.
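For readers who want to poke at WAL from psql, the statements below illustrate the concepts mentioned above (function names assume PostgreSQL 10 or later; older releases use pg_current_xlog_location and pg_switch_xlog instead).

SELECT pg_current_wal_lsn();   -- current write position in the write-ahead log
SELECT pg_switch_wal();        -- close the current WAL segment and switch to a new one
CHECKPOINT;                    -- flush dirty shared buffers and record the checkpoint location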
PostgreSQL is an open source relational database management system. It has over 15 years of active development and supports most operating systems. The tutorial provides instructions on installing PostgreSQL on Linux, Windows, and Mac operating systems. It also gives an overview of PostgreSQL's features and procedural language support.
MySQL Administrator
Basic course
- Introduction to MySQL
- MySQL installation / configuration
- MySQL architecture - MySQL storage engines
- MySQL administration
- MySQL backup / recovery
- MySQL monitoring
Advanced course
- MySQL Optimization
- MariaDB / Percona
- MySQL HA (High Availability)
- MySQL troubleshooting
NeoClova
http://neoclova.co.kr/
Oracle Drivers configuration for High Availability, is it a developer's job? (Ludovico Caldara)
UCP, GridLink, TAF, AC, TAC, FAN: the configuration of Oracle drivers for application high availability is not an easy job. Developers often care about the minimal working configuration, while the DBAs are busy with operations. In this session I will try to demystify application-server connectivity to the database and give a direction toward the highest availability, using Real Application Clusters and new Oracle features like TAC and CMAN TDM.
The document provides an overview of WebLogic Server topology, configuration, and administration. It describes key concepts such as domains, servers, clusters, Node Manager, and machines. It also covers configuration files, administration tools like the Administration Console and WLST, and some sample configuration schemes for development, high availability, and simplified administration.
The document discusses secondary index searches in MySQL. It describes the process as starting with a search of the secondary index tree to find the primary key. The primary key is then added to an unsorted list. Once all secondary index searches are complete, the primary key list is sorted. The primary index tree is then searched sequentially using the sorted primary key list to retrieve the clustered data records. Finally, the clustered data records are accessed sequentially.
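A small illustrative example (table and data are hypothetical, not from the document): in InnoDB the secondary index idx_customer stores (customer_id, order_id) pairs, so the query below first collects matching primary keys from that index and then fetches the full rows from the clustered primary key index, which is the two-step lookup described above.

CREATE TABLE orders (
  order_id    BIGINT PRIMARY KEY,     -- clustered index key
  customer_id INT NOT NULL,
  amount      DECIMAL(10,2),
  KEY idx_customer (customer_id)      -- secondary index
);
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

The sorting of the collected primary keys that the summary mentions corresponds to MySQL batching such lookups (Multi-Range Read) so the clustered index can be read sequentially.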
Note: When you view the slide deck via a web browser, the screenshots may be blurred. You can download and view them offline (the screenshots are clear).
Learning postgresql, Chapter 1: Getting started with postgresql
Remarks
This section provides an overview of what postgresql is, and why a developer might want to use it. It should also mention any large subjects within postgresql, and link out to the related topics. Since the Documentation for postgresql is new, you may need to create initial versions of those related topics.
Golang Project Guide from A to Z: From Feature Development to Enterprise Appl... (Kyuhyun Byun)
This comprehensive presentation offers a deep dive into Go language development methodologies, covering projects of all scales. Whether you're working on a small prototype or a large-scale enterprise application, this guide provides valuable insights and best practices.
Key topics covered:
- Distinguishing between small and large projects in Go
- Code patterns for small, feature-focused projects
- Comparison of Handler and HandlerFunc approaches
- Enterprise application design using Domain-Driven Design (DDD)
- Detailed explanations of architectural layers: Presenter, Handler, Usecase, Service, Repository, and Recorder
- NoSQL (DynamoDB) modeling techniques
- Writing effective test code and using mocking tools like 'counterfeiter'
- Essential tools for production-ready applications: APM, error monitoring, metric collection, and logging services
This presentation is ideal for Go developers of all levels, from beginners looking to structure their first projects to experienced developers aiming to optimize large-scale applications. It provides practical advice on code organization, testing strategies, and operational considerations to help you build robust, maintainable Go applications.
Whether you're starting a new project or looking to improve an existing one, this guide offers valuable insights into Go development best practices across different project scales and complexities.
This document discusses strategies for automating remote database backups across multiple data centers. It recommends scheduling backups automatically after a queue time to use underutilized backup servers. The backup manager would select the target backup server based on its service zone, data center location, and available quota to balance load. It would also avoid using the same backup server consecutively and start backups at different times each day to improve reliability in case of failures.
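A purely hypothetical sketch of that selection step, assuming a backup_servers table that tracks service zone, data center, free quota, and the previously used server (all schema and values below are invented for illustration):

SELECT server_id
FROM   backup_servers
WHERE  service_zone = 'zone-a'            -- same service zone as the database
  AND  datacenter  <> 'dc-1'              -- keep the copy in a different data center
  AND  quota_free_gb > 500                -- enough remaining quota
  AND  server_id <> 17                    -- not the server used for the previous backup
ORDER BY quota_free_gb DESC
LIMIT 1;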
This document discusses MySQL 5.7's JSON datatype. It introduces JSON and why it is useful for integrating relational and schemaless data. It covers creating JSON columns, inserting and selecting JSON data using functions like JSON_EXTRACT. It discusses indexing JSON columns using generated columns. Performance is addressed, showing JSON tables can be 40% larger with slower inserts and selects compared to equivalent relational tables without indexes. Options for stored vs virtual generated columns are presented.
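The statements below sketch the features mentioned in that summary; they are generic MySQL 5.7 examples with made-up table and column names, not the tables benchmarked in the deck.

CREATE TABLE product (
  id    INT AUTO_INCREMENT PRIMARY KEY,
  attrs JSON,
  brand VARCHAR(50) AS (JSON_UNQUOTE(JSON_EXTRACT(attrs, '$.brand'))) STORED,  -- generated column
  KEY idx_brand (brand)                                                        -- index on the generated column
);
INSERT INTO product (attrs) VALUES ('{"brand": "acme", "color": "red"}');
SELECT JSON_EXTRACT(attrs, '$.color') FROM product WHERE brand = 'acme';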
Intro KaKao MRTE (MySQL Realtime Traffic Emulator) (I Goo Lee)
The document describes the process of opening a TCP connection between a client and a MySQL database, including the initial handshake and response packets. It then explains how the MRTE-Collector works by using message queues to capture and parse MySQL packets from the source database and replay them against the target database using multiple SQL player threads. The MRTE-Collector publishes messages to RabbitMQ, which routes them to the proper queues subscribed to by MRTE-Player.
MySQL Slow Query log Monitoring using Beats & ELK (I Goo Lee)
This document provides instructions for using Filebeat, Logstash, Elasticsearch, and Kibana to monitor MySQL slow query logs. It describes installing and configuring each component, with Filebeat installed on database servers to collect slow query logs, Logstash to parse and index the logs, Elasticsearch for storage, and Kibana for visualization and dashboards. Key steps include configuring Filebeat to ship logs to Logstash, using grok filters in Logstash to parse the log fields, outputting to Elasticsearch, and visualizing slow queries and creating sample dashboards in Kibana.
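Before Filebeat can ship anything, the slow query log has to be enabled on each database server; a minimal sketch (threshold and path are illustrative) is:

SET GLOBAL slow_query_log      = ON;
SET GLOBAL long_query_time     = 1;     -- log statements slower than one second
SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';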
MySQL Audit using Percona audit plugin and ELK (I Goo Lee)
This document discusses setting up MySQL auditing using the Percona Audit Plugin and ELK (Elasticsearch, Logstash, Kibana). It describes installing and configuring the Percona Audit Plugin on MySQL servers to generate JSON audit logs. It then covers using Rsyslog or Filebeat to ship the logs to the Logstash server, and configuring Logstash to parse, enrich, and index the logs into Elasticsearch. Finally, it discusses visualizing the audit data with Kibana dashboards containing graphs and searching. The architecture involves MySQL servers generating logs, Logstash collecting and processing them, and Elasticsearch and Kibana providing search and analytics.
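For reference, enabling the Percona Audit Log plugin with JSON output usually looks like the sketch below; audit_log_format is read-only at runtime and is normally set in my.cnf before the server starts, so the values shown are illustrative.

INSTALL PLUGIN audit_log SONAME 'audit_log.so';
SHOW GLOBAL VARIABLES LIKE 'audit_log%';            -- audit_log_file, audit_log_format, ...
SET GLOBAL audit_log_policy = 'QUERIES';            -- QUERIES / LOGINS / ALL / NONE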
4. Intro
Fluentd is a fully free and fully open-source log collector that instantly enables you to have a Log Everything architecture with 125+ types of systems.
http://docs.fluentd.org
18. Index
1. Intro
2. Install
3. Test
   - MySQL slow query logging
   - MySQL process list logging
   - Game Log Data Collect
   - Log Server setup
4. QnA
19. Step 1. Install the Fluentd plugins on the collection server
https://github.com/y-ken/fluent-plugin-mysql-query
https://github.com/shunwen/fluent-plugin-rename-key
# yum -y install ruby-rdoc ruby-devel rubygems
find / -name fluent-gem
/opt/td-agent/embedded/bin/fluent-gem install fluent-plugin-mysql-query
/opt/td-agent/embedded/bin/fluent-gem install fluent-plugin-rename-key
[Diagram: Service DB (ec-ldb-s2) -> Collector (ec-ldb-m2)]
Test.2 MySQL process list logging
20. Test.2 MySQL process list logging
Step 2. Create the log table on the Collector server
$ mysql -u root -p
use test;
drop table if exists test.t_mysql_process;
create table test.t_mysql_process (
log_date datetime default current_timestamp
, hostname varchar(100)
, id bigint
, user varchar(100)
, host varchar(100)
, db varchar(64)
, command varchar(50)
, duration_time bigint
, state varchar(4000)
, info varchar(10000)
);
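Once snapshots start arriving, a query like the following (illustrative, written against the table above) pulls out sessions that had been running for more than ten seconds at capture time:

SELECT log_date, user, host, db, command, duration_time, info
FROM   test.t_mysql_process
WHERE  duration_time > 10
ORDER BY log_date DESC, duration_time DESC
LIMIT  20;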
21. Test.2 MySQL process list logging
Step 3. Configure td-agent.conf on the DB server
$ sudo vi /etc/td-agent/td-agent.conf
# Poll the Service DB once a minute and emit the full processlist
<source>
type mysql_query
host ec-ldb-s2
port 19336
database test
username root
password 433dlxjsjf12!@!
interval 1m
tag ec-ldb-s2.processlist
query show full processlist;
record_hostname yes
nest_result no
nest_key data
#row_count yes
#row_count_key row_count
</source>
# Re-tag the events and rename the Time field to duration_time
<match ec-ldb-s2.processlist>
type rename_key
remove_tag_prefix ec-ldb-s2.
append_tag ec-ldb-s2
rename_rule1 Time duration_time
</match>
# Copy each snapshot to stdout and bulk-insert it into MySQL on the Collector
<match processlist.ec-ldb-s2>
type copy
<store>
type stdout
</store>
<store>
type mysql_bulk
host ec-ldb-m2
port 19336
database test
username root
password testpasswd12#$
column_names hostname,Id,User,Host,db,Command,State,Info,duration_time
key_names hostname,Id,User,Host,db,Command,State,Info,duration_time
table t_mysql_process
flush_interval 5s
</store>
</match>
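A quick sanity check on the Collector after restarting td-agent (an illustrative query, not part of the original slides): the row count should grow roughly once per the one-minute polling interval configured above.

SELECT COUNT(*) AS rows_collected, MAX(log_date) AS last_snapshot
FROM   test.t_mysql_process;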
30. Index
1. Intro
2. Install
3. Test
   - MySQL slow query logging
   - MySQL process list logging
   - Game Log Data Collect
   - Log Server setup
4. QnA
31. Step 1. Install the Fluentd plugin on the collection server
https://github.com/tagomoris/fluent-plugin-mysql
# yum -y install ruby-rdoc ruby-devel rubygems
find / -name fluent-gem
/opt/td-agent/embedded/bin/fluent-gem install fluent-plugin-mysql
Test.4 Log Server setup
[Diagram: Client x3 -> HAProxy(L4) -> Server x2; Service DB (ec-ldb-s2) -> Collector (ec-ldb-m2)]
32. Step 2. Create the log tables on the Collector server
$ mysql -u root -p
use test;
drop table if exists test.t_log_connect;
create table test.t_log_connect (
log_date datetime default current_timestamp
, jsondata text
);
drop table if exists test.t_log_money;
create table test.t_log_money (
log_date datetime default current_timestamp
, jsondata text
);
Test.4 Log Server setup
33. Step 3. Configure td-agent.conf on the DB server
$ sudo vi /etc/td-agent/td-agent.conf
# Receive log events over HTTP; the request path becomes the event tag (e.g. ec-ldb-s2.t_log_connect)
<source>
type http
port 8888
body_size_limit 1mb
keepalive_timeout 10s
</source>
# Echo connection-log events to stdout and store them as JSON in test.t_log_connect on the Collector
<match ec-ldb-s2.t_log_connect>
type copy
<store>
type stdout
</store>
<store>
type mysql
host ec-ldb-m2
port 19336
database test
username root
password testpasswd12#$
table t_log_connect
columns jsondata
format json
flush_interval 5s
</store>
</match>
# Echo money-log events to stdout and store them as JSON in test.t_log_money on the Collector
<match ec-ldb-s2.t_log_money>
type copy
<store>
type stdout
</store>
<store>
type mysql
host ec-ldb-m2
port 19336
database test
username root
password testpasswd12#$
table t_log_money
columns jsondata
format json
flush_interval 5s
</store>
</match>
Test.4 Log Server setup
34. Step 4. Restart td-agent and generate test log events
$ sudo /etc/init.d/td-agent stop
sudo /etc/init.d/td-agent start
-- Generate test log events
curl -X POST -d 'json={"ver":"1.0","action":"login","user":1}' http://localhost:8888/ec-ldb-s2.t_log_connect
curl -X POST -d 'json={"ver":"1.0","action":"login","user":1}' http://localhost:8888/ec-ldb-s2.t_log_money
tail -f /var/log/td-agent/td-agent.log
Test.4 Log Server setup
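To confirm that the posted events reached the Collector, an illustrative check is shown below (JSON_EXTRACT assumes MySQL 5.7 or later; on older versions simply select the raw jsondata column).

SELECT log_date,
       JSON_EXTRACT(jsondata, '$.action') AS action,
       JSON_EXTRACT(jsondata, '$.user')   AS user_id
FROM   test.t_log_connect
ORDER BY log_date DESC
LIMIT  10;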