Materials for the 29th study session: "A Beginner's Introduction to PostgreSQL Recovery"
See also http://www.interdb.jp/pgsql (Coming soon!)
For beginners. An explanation of how PostgreSQL's WAL, CHECKPOINT, and online backup mechanisms work.
Once you have read this, continue with → http://www.slideshare.net/satock/29shikumi-backup
This document discusses optimizations for Java programs to better utilize CPUs, especially newer CPU instructions. It covers how Java code is compiled to bytecode then JIT compiled to machine code at runtime. Improvements in OpenJDK 9-11 are highlighted, including support for Intel AVX-512, fused multiply-add, SHA extensions, and reducing penalties when switching between instruction sets. Optimizing math functions and string processing with SIMD is also discussed.
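As a concrete illustration of the fused multiply-add support mentioned above: Java 9 added `Math.fma`, which the JIT compiler can intrinsify into a single hardware FMA instruction on CPUs that provide one. A minimal sketch (the class name `FmaDemo` is my own; the deck itself does not show code):

```java
// Sketch: Math.fma (added in Java 9) computes a*b + c with a single
// rounding step, and on supporting CPUs the JIT can emit one fused
// multiply-add instruction instead of separate multiply and add.
public class FmaDemo {
    public static void main(String[] args) {
        double a = 2.0, b = 3.0, c = 4.0;
        // fma(a, b, c) = a*b + c, rounded once
        System.out.println(Math.fma(a, b, c)); // prints 10.0
    }
}
```

Beyond the latency win, the single rounding step can also change results slightly compared to `a*b + c` evaluated with two roundings, which is worth keeping in mind when comparing outputs across JDK versions.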
[db tech showcase Tokyo 2015] B24: The Pinnacle of Availability ~ Why NonStop SQL Never Stops ~ by 日本ヒューレット・パ...Insight Technology, Inc.
NonStop SQL is a venerable RDBMS whose lineage traces back to Ingres, designed by Jim Gray, Jerry Held, and Karel Youseffi. Once you learn how it is implemented for mission-critical workloads, you will want to try it. Isn't it time to stop running databases while praying "please don't go down"? This talk introduces an implementation that never loses a transaction under any kind of failure: how it behaves in memory when a failure strikes, whether disk drivers can really be trusted, and the technology that reconciles the seemingly contradictory goals of losing no transactions while still delivering performance. And to the objection "isn't that old-fashioned?": not at all. Today you can develop against open interfaces, while the infrastructure, NonStop SQL, firmly protects your data: not "people" but "the computer". Come and experience it for yourself.
This document discusses the application of PostgreSQL in a large social infrastructure project involving smart meter management. It describes three main missions: (1) loading 10 million datasets within 10 minutes, (2) saving data for 24 months, and (3) stabilizing performance for large scale SELECT statements. Various optimizations are discussed to achieve these missions, including data modeling, performance tuning, reducing data size, and controlling execution plans. The results showed that all three missions were successfully completed by applying PostgreSQL expertise and customizing it for the large-scale requirements of the project.