Payara Scale (Hazelcast Enterprise) on Azure - Yoshio Terada
This is the hands-on lab (HoL) for building Payara Scale on Azure.
If you have a license from Payara or Hazelcast, you can also build a multi-region cluster, or a hybrid cluster spanning on-premises and public cloud. This makes the solution very useful for mission-critical environments.
Rhebok, High Performance Rack Handler / RubyKaigi 2015 - Masahiro Nagano
This document discusses Rhebok, a high-performance Rack handler written in Ruby. Rhebok uses a prefork architecture for concurrency and achieves 1.5-2x better performance than Unicorn. It implements efficient network I/O using techniques such as I/O timeouts, TCP_NODELAY, and writev(). Rhebok also uses the ultra-fast PicoHTTPParser for HTTP request parsing. The document provides an overview of Rhebok, benchmarks showing its performance, and details of its internals and architecture.
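One of the socket techniques mentioned, TCP_NODELAY, can be illustrated with a short sketch. This is not Rhebok's actual code; it only shows, with Ruby's standard socket library, what disabling Nagle's algorithm on an accepted connection looks like:

```ruby
require "socket"

# Minimal sketch (not Rhebok's code): set TCP_NODELAY on an accepted
# connection so small responses are sent immediately instead of waiting
# for the kernel to coalesce writes (Nagle's algorithm).
server = TCPServer.new("127.0.0.1", 0)               # port 0: OS picks a free port
client = TCPSocket.new("127.0.0.1", server.addr[1])
conn   = server.accept

conn.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)
flag = conn.getsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY).int
puts flag.zero? ? "Nagle enabled" : "Nagle disabled"
```

A real handler would do this once per accepted connection, right after `accept`.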
This document discusses strategies for optimizing access to large "master data" files in PHP applications. It describes converting master data files from PHP arrays to tab-separated value (TSV) files to reduce loading time. Benchmark tests show the TSV format reduces file size by over 50% and loading time from 70 milliseconds to 7 milliseconds without OPcache. Accessing rows as arrays by splitting on tabs is 3 times slower but still very fast at over 350,000 gets per second. The TSV optimization has been used successfully in production applications.
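The access pattern described above can be sketched as follows (the sample data and field layout are invented for illustration): rows are loaded once as raw tab-separated strings and split into fields only when a row is actually requested.

```ruby
require "tempfile"

# Invented sample master data: id, name, price; one tab-separated record per line.
tsv = Tempfile.new("master_data")
tsv.write("1\tpotion\t50\n2\tether\t120\n")
tsv.rewind

# Load once: keep each row as its raw line, keyed by the leading id field.
rows = {}
tsv.each_line do |line|
  id = line[0, line.index("\t")].to_i
  rows[id] = line.chomp
end

# Access: split on tabs only at lookup time, as the benchmark above measures.
def get(rows, id)
  rows[id]&.split("\t")
end

p get(rows, 2)  # => ["2", "ether", "120"]
```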
Stream processing in Mercari - Devsumi 2015 autumn LT - Masahiro Nagano
This document discusses Mercari's use of stream processing to monitor logs and metrics. It describes how Mercari previously used scripts to parse logs periodically, which was inefficient. Mercari now uses Norikra, an open source stream processing tool, to ingest logs and metrics in real-time and perform analytics using SQL queries. Norikra provides benefits over their previous approach like no need to restart processes and the ability for any engineer to write SQL queries. The results are then sent to monitoring tools like Mackerel for alerting and graphing.
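A Norikra query is plain SQL over a stream target with a time window. The query below is a made-up example in that style (the target and field names are invented, not Mercari's): count server errors per path, emitted once per one-minute batch.

```sql
-- Hypothetical query in Norikra's SQL dialect (names invented):
-- count 5xx responses per path over 1-minute batches.
SELECT path, COUNT(*) AS errors
FROM access_log.win:time_batch(1 min)
WHERE status >= 500
GROUP BY path
```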
Gazelle - Plack Handler for performance freaks #yokohamapm - Masahiro Nagano
1) Gazelle is a fast PSGI/Plack HTTP server written in Perl and C code.
2) Benchmarks show it can handle 3x more requests per second than other servers for simple applications.
3) Its speed comes from optimizations like using accept4, writev system calls, and being written mostly in fast C code via XS.
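The writev() optimization can be sketched in Ruby, even though Gazelle does the real thing in C via XS: since Ruby 2.5, `IO#write` with multiple arguments uses writev(2) where available, sending several buffers in one system call without first concatenating them.

```ruby
# Sketch of the writev() idea (Gazelle implements this in C/XS): pass the
# response header and body as separate buffers so they can go out in one
# system call, instead of building a single concatenated string first.
r, w = IO.pipe

header = "HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\n"
body   = "hello"

bytes = w.write(header, body)   # multiple args: one writev(2) where supported
w.close

response = r.read
puts "wrote #{bytes} bytes"
```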
This document discusses the memory usage of Perl-based web applications running in a multi-process prefork model with MaxRequestsPerChild configuration. It notes that this model ensures memory is reliably freed when processes exit after fulfilling a set number of requests. It allows for temporary large memory allocations or memory leaks to be tolerated. The operator needs to monitor for irregular increases in memory usage and respond accordingly.
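The lifecycle described here can be sketched with a deliberately simplified model (not any particular server's code): a worker serves at most a fixed number of requests and then stops, so when the master re-forks, any memory the old worker leaked is returned to the OS.

```ruby
# Simplified model of MaxRequestsPerChild (not real server code): a worker
# handles at most max_requests jobs, then stops; in a real prefork server it
# would exit here and the master would fork a fresh, clean-memory replacement.
def worker_loop(jobs, max_requests)
  handled = 0
  while handled < max_requests && (job = jobs.shift)
    job.call          # handle one request (may allocate or even leak memory)
    handled += 1
  end
  handled             # worker "dies" here; leaked memory dies with the process
end

jobs = Array.new(10) { proc { } }
p worker_loop(jobs, 3)   # the worker stops after 3 of the 10 queued jobs
```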
This document discusses several Perl modules:
- Time::Crontab parses crontab date and time fields. Proclet supports cron-like jobs.
- Apache::LogFormat::Compiler had issues with daylight saving time changes but version 0.14 and higher fixed this.
- POSIX::strftime::Compiler was created to avoid issues with locales affecting strftime outputs.
- Modules like Time::TZOffset, HTTP::Entity::Parser, WWW::Form::UrlEncoded, and WWW::Form::UrlEncoded::XS were created with performance improvements over existing solutions. Benchmark results showed the XS implementations having significantly better performance.
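For readers unfamiliar with the Perl side, the job a www-form-urlencoded parser such as WWW::Form::UrlEncoded performs can be shown with Ruby's standard library (this uses Ruby's URI module, not the Perl module itself):

```ruby
require "uri"

# Decode an application/x-www-form-urlencoded string into key/value pairs,
# percent-decoding values and preserving duplicate keys. This is the task
# WWW::Form::UrlEncoded(::XS) performs on the Perl side.
pairs = URI.decode_www_form("q=caf%C3%A9&page=2&page=3")
p pairs  # => [["q", "café"], ["page", "2"], ["page", "3"]]
```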
Apache::LogFormat::Compiler YAPC::Asia 2013 Tokyo LT-Thon - Masahiro Nagano
This story describes the development of the Apache::LogFormat::Compiler (ALFC) module by an operations engineer to optimize logging performance in a web application. The original PM::AccessLog module was identified as a performance bottleneck by profiling tools. Several optimizations were tried, including the PM::AxsLog middleware, but it only supported fixed log formats. The operations engineer then created ALFC to compile log formats to Perl code for improved performance. It allowed the AxsLog middleware to be updated, achieving a 5x performance gain in logging. This addressed the original developer's need to customize log formats and store additional fields in logs.
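The core trick, "compile log formats to code", can be sketched in Ruby. The directives and field names below are invented (ALFC's real directives follow Apache's LogFormat syntax): the format string is translated into code once, and the resulting closure is called per request with no per-request parsing.

```ruby
# Invented mini-compiler illustrating ALFC's approach: translate a format
# string into Ruby source once, eval it into a lambda, and call that lambda
# per request. Directive names here are made up, not ALFC's.
DIRECTIVES = {
  "%h" => "env[:remote_addr]",
  "%r" => "env[:request_line]",
  "%s" => "env[:status].to_s",
}.freeze

def compile_format(fmt)
  # Note: a real compiler must also escape literal text; skipped here.
  code = fmt.gsub(/%[hrs]/) { |d| "\#{#{DIRECTIVES[d]}}" }
  eval("->(env) { \"#{code}\" }")
end

line_for = compile_format("%h %r %s")   # compiled once at startup
env = { remote_addr: "127.0.0.1", request_line: "GET / HTTP/1.1", status: 200 }
puts line_for.call(env)                 # cheap per-request call
```

Compiling once and reusing the closure is what removes the per-request format parsing that made the original logger a bottleneck.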
25. my $memos = $self->dbh->select_all(
        'SELECT * FROM memos WHERE is_private=0
         ORDER BY created_at DESC, id DESC LIMIT 100'
    );
    for my $memo (@$memos) {
        $memo->{username} = $self->dbh->select_one(
            'SELECT username FROM users WHERE id=?',
            $memo->{user},
        );
    }
webapp/perl/lib/Isucon3/Web.pm
Loops 100 times (one extra SELECT per memo)
“/”
27. memos table: id, user_id, ...   users table: id, name
    memos JOIN users ON memos.user_id = users.id
28. my $memos = $self->dbh->select_all(
        'SELECT memos.*, users.username
         FROM memos JOIN users ON memos.user = users.id
         WHERE memos.is_private=0
         ORDER BY memos.created_at DESC, memos.id DESC
         LIMIT 100'
    );
webapp/perl/lib/Isucon3/Web.pm
“/”, “/recent”
31. SELECT * FROM memos WHERE is_private=0 ORDER BY created_at DESC LIMIT 100
[Slide diagram: the full memos table is scanned, rows are filtered on is_private (0/1), and the surviving rows are then sorted before LIMIT applies.]
webapp/perl/lib/Isucon3/Web.pm
Without an index
32. Create an index
cat <<'EOF' | mysql -u isucon isucon
ALTER TABLE memos ADD INDEX (is_private,created_at);
EOF
init.sh
43. cat <<'EOF' | mysql -u isucon isucon
ALTER TABLE memos ADD COLUMN title text;
UPDATE memos SET
  title = substring_index(content, "\n", 1);
EOF
init.sh
Adding a title column