This document describes how Bank of Brazil uses PostgreSQL to process a high volume of documents every day in its multi-document processing application. The system faces high-concurrency challenges, with more than 22,000 stations accessing a single PostgreSQL database. To address this, Bank of Brazil implemented solutions such as PgBouncer for connection pooling, caching with Memcached, advisory locks, temporary tables, and partitioning of 24 tables. These measures helped Bank of Brazil scale its PostgreSQL database to meet the heavy demands of scanning, recognizing, and managing documents from its more than 6,000 branches.
High concurrency with Postgres
1. High Concurrency with Postgres
Bank of Brazil in real life
Fábio Telles Rodrigues
Timbira - The Brazilian Postgres Company
February 4, 2016
PGConf Russia
2. About me
Fábio Telles Rodrigues
Open source software evangelist for 15+ years
PostgreSQL DBA for 10+ years
Contributor to the Brazilian PostgreSQL community
Blog: http://savepoint.blog.br (Portuguese only)
@telles
5. Bank of Brazil
Founded in 1808, the oldest bank in Brazil
59% of shares belong to the government
The only bank that has branches in all Brazilian cities
More than 110,000 direct employees
6. Multi Document Processing app
Developed by Bull (now Atos)
Development started as a client/server application 15 years ago
Mostly written in C++
Performs decentralized dematerialization of documents combined with complex central processing of them
Each document has different workflows
7. Multi Document Processing app
Documents are captured using scanners at branches
Local image recognition (reaches about an 85% success rate)
Shipping images and metadata to a central point (images are not stored inside the database)
Central image recognition using third-party tools like ABBYY and A2iA
Manual image recognition
Other manual interactions like authorizations and deviations
Thousands of business rules for each document
Complex interaction with many legacy systems
8. Real life
6,364 branches
22,432 stations using the application (expected to double by the end of the year)
25 servers performing central recognition
12 application servers
200 stations performing manual recognition
Hundreds of leaders requesting reports during peak processing
One PostgreSQL database
10. In a regular day
check clearing: 600,000 (2 million on the busiest day)
escrow checks: 70,000
signature cards: 30,000
non-financial documents: 50,000
Critical window between 4 pm and 7 pm
Database growth of 10 GB
80 GB of archives generated
11. Challenges
High number of connections
Locks in processing queues
High number of transactions
Small processing window
Many heavy queries for reports
Need to keep information in the database for two years
New features being implemented constantly
12. Adopted solutions
3 PgBouncer instances
Memcached + LISTEN / NOTIFY to spread information across the stations (see the notification sketch after this list)
Strict control of transactions
The queue for image recognition was implemented in memory and integrated with the database through PL/Perl using sockets
Use of advisory locks in other queues (see the advisory-lock sketch below)
Memory adjustments for specific users (see the ALTER ROLE sketch below)
Vacuum and fillfactor adjustments for specific tables (see the storage-parameter sketch below)
Partitioning of 24 tables (see the partitioning sketch below)
Use of temporary tables and unlogged tables (see the unlogged/temporary table sketch below)
Redesign of critical processes
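The slides do not show how Memcached and LISTEN / NOTIFY are wired together. A minimal sketch of the database side, assuming a hypothetical channel name station_cache and a hypothetical branch_parameters table, could look like this (the Memcached refresh itself happens in the listening client):

    -- Publisher side: after changing shared data, tell every listening
    -- station that its cached copy is stale.
    UPDATE branch_parameters SET max_batch_size = 500 WHERE branch_id = 1234;
    NOTIFY station_cache, 'branch_parameters:1234';

    -- Listener side: each station (or a cache-refresher process) subscribes
    -- once; the driver then delivers notifications asynchronously and the
    -- client re-reads the row and updates its Memcached entry.
    LISTEN station_cache;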
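The deck does not include the queue SQL. A minimal sketch of the advisory-lock pattern, assuming a hypothetical doc_queue(id, status) table, might look like this:

    -- Each worker claims one pending document without blocking the others:
    -- pg_try_advisory_xact_lock() returns false immediately for ids already
    -- claimed, so busy rows are skipped instead of waited on.
    BEGIN;

    SELECT id
    FROM   doc_queue
    WHERE  status = 'pending'
      AND  pg_try_advisory_xact_lock(id)  -- non-blocking claim keyed on the row id
    ORDER  BY id
    LIMIT  1;

    -- ...process the document returned above, then mark it finished
    -- (42 stands for the claimed id)...
    UPDATE doc_queue SET status = 'done' WHERE id = 42;

    COMMIT;  -- the transaction-scoped advisory lock is released here

This is simplified: a production version usually wraps the scan in a subquery so the lock function is not evaluated for more rows than the one being claimed.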
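Memory adjustments for specific users can be done per role. A sketch assuming a hypothetical report_user role used for the heavy report queries:

    -- Give only the reporting role a larger sort/hash memory budget and a
    -- safety timeout; the thousands of OLTP connections keep the defaults.
    ALTER ROLE report_user SET work_mem = '256MB';
    ALTER ROLE report_user SET statement_timeout = '30min';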
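The vacuum and fillfactor tuning is likewise per table. A sketch, again using the hypothetical doc_queue table:

    -- Keep free space on each heap page so frequent status updates can stay
    -- on the same page (HOT updates), and autovacuum this hot table far more
    -- aggressively than the global default.
    ALTER TABLE doc_queue SET (fillfactor = 70);
    ALTER TABLE doc_queue SET (
        autovacuum_vacuum_scale_factor  = 0.01,
        autovacuum_analyze_scale_factor = 0.01
    );

Note that a lower fillfactor only applies to newly written pages; existing pages are repacked only by a rewrite such as VACUUM FULL or CLUSTER.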
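The slides do not say how the 24 tables are partitioned. In early 2016, before declarative partitioning arrived in PostgreSQL 10, this was normally done with table inheritance plus a routing trigger; a minimal monthly-partition sketch for a hypothetical documents table:

    -- Parent table holds no rows itself; children carry CHECK constraints
    -- so constraint exclusion can skip irrelevant months in queries.
    CREATE TABLE documents (
        id          bigserial,
        captured_at date NOT NULL,
        payload     text
    );

    CREATE TABLE documents_2016_02 (
        CHECK (captured_at >= DATE '2016-02-01' AND captured_at < DATE '2016-03-01')
    ) INHERITS (documents);

    -- Routing trigger: inserts on the parent are redirected to the child.
    CREATE OR REPLACE FUNCTION documents_insert_router() RETURNS trigger AS $$
    BEGIN
        IF NEW.captured_at >= DATE '2016-02-01'
           AND NEW.captured_at < DATE '2016-03-01' THEN
            INSERT INTO documents_2016_02 VALUES (NEW.*);
        ELSE
            RAISE EXCEPTION 'no partition for %', NEW.captured_at;
        END IF;
        RETURN NULL;  -- the row was already stored in the child table
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER documents_partition_trg
        BEFORE INSERT ON documents
        FOR EACH ROW EXECUTE PROCEDURE documents_insert_router();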
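Unlogged and temporary tables take bulk and scratch work out of the WAL. A sketch with hypothetical table names:

    -- Unlogged staging table: skips WAL entirely, so bulk loads from the
    -- recognition servers are much cheaper; its contents are lost on a crash.
    CREATE UNLOGGED TABLE recognition_staging (
        doc_id bigint,
        result text
    );

    -- Session-local scratch table for report assembly, dropped automatically
    -- when the transaction commits.
    CREATE TEMPORARY TABLE report_scratch (
        branch_id int,
        total     bigint
    ) ON COMMIT DROP;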