Power PostgreSQL - PerfList
This is a decent list of performance parameters. Notes on the ones that matter for us:
shared_buffers: Reference point: a quad-PIII server with 4 GB RAM and a 40 GB, 150-connection, heavy-transaction-processing database runs with 240 MB (30,000 8 kB buffers).
work_mem: This is expensive, as it is allocated per sort, per connection. About 2048 kB should be good for us; we can increase it with more RAM.
maintenance_work_mem: This is how much RAM VACUUM and ANALYZE use. Ideally 50-75% of the largest table, but our tables can easily grow past 600 MB, so we'll choose about 512 MB, since we have 2 GB RAM.
checkpoint_segments: Needs to be larger for transaction-heavy and insert-heavy databases. The actual disk space taken by the log is roughly
(checkpoint_segments * 2 + 1) * 16 MB. Suggested values run between 12 and 256; start conservatively and raise it if you see warnings in the log. Will investigate.
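The disk-usage formula above is easy to tabulate. A minimal sketch (the function name is ours, and the numbers are estimates from the formula, not measurements):

```python
# Estimate of on-disk WAL size from the rule of thumb above:
# size ~= (checkpoint_segments * 2 + 1) * 16 MB.

def wal_disk_mb(checkpoint_segments: int) -> int:
    """Approximate maximum disk taken by WAL segment files, in MB."""
    return (checkpoint_segments * 2 + 1) * 16

# The stock default of 3 segments stays near 112 MB; the suggested
# 12-256 range grows the WAL footprint considerably.
for segs in (3, 12, 256):
    print(segs, wal_disk_mb(segs))
```

So even the top of the suggested range (256 segments) costs only around 8 GB of log space, which is why raising it on a busy server is usually cheap.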
effective_cache_size: Tells the query planner the largest possible object that can be cached in memory. Should be about 2/3 of memory; we'll choose 1 GB.
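Pulling the choices above together, a postgresql.conf fragment for our 2 GB box might look like this (a sketch, assuming the old unsuffixed units: shared_buffers and effective_cache_size in 8 kB pages, work_mem and maintenance_work_mem in kB; verify the units against your server version before applying):

```ini
# Hypothetical postgresql.conf fragment assembled from the notes above
shared_buffers = 30000            # 8 kB pages, ~240 MB
work_mem = 2048                   # kB, per sort per connection
maintenance_work_mem = 524288     # kB, 512 MB for VACUUM/ANALYZE
effective_cache_size = 131072     # 8 kB pages, ~1 GB (~2/3 of RAM would be larger)
```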
Auto-vacuum: The default settings for autovacuum are very conservative, though, and are more suitable for a very small database. I generally use something aggressive like:
-D -v 400 -V 0.4 -a 100 -A 0.3
This vacuums tables after 400 rows + 40% of the table has been updated or deleted, and analyzes after 100 rows + 30% of the table has been inserted, updated, or deleted. The above configuration also lets me set my max_fsm_pages to 50% of the data pages in the database with confidence that that number won't be overrun, causing database bloat.
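The threshold rule described above is just "base + scale * table rows". A small sketch of that arithmetic (the function names are ours; the base/scale defaults match the -v/-V and -a/-A flags quoted above):

```python
# Hedged sketch of the pg_autovacuum threshold rule described above:
# a table is vacuumed/analyzed once the number of changed tuples
# exceeds base + scale * rows_in_table.

def vacuum_threshold(rows: int, base: int = 400, scale: float = 0.4) -> float:
    """Dead tuples needed before a vacuum triggers (-v/-V flags)."""
    return base + scale * rows

def analyze_threshold(rows: int, base: int = 100, scale: float = 0.3) -> float:
    """Changed tuples needed before an analyze triggers (-a/-A flags)."""
    return base + scale * rows

# For a 10,000-row table: vacuum after 4,400 dead tuples,
# analyze after 3,100 inserted/updated/deleted tuples.
print(vacuum_threshold(10_000), analyze_threshold(10_000))
```

Because at most ~40% of a table can be dead before a vacuum fires, sizing max_fsm_pages at 50% of the data pages leaves headroom, which is the reasoning behind the claim above.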