Thursday, April 10, 2008
Monday, March 06, 2006
The FreeBSD Diary -- PostgreSQL - analyzing a query to improve speed
A FreeBSD ports developer demonstrating the process he went through to optimize a query.
Monday, February 27, 2006
Collection of PostgreSQL Performance Tips
This is a list of performance tips from some people who really know their shiznaz.
Power PostgreSQL - PerfList
This is a decent list of performance parameters. Some notes for us:
shared_buffers: for a quad PIII server, 4GB RAM, 40GB database, 150-connection heavy transaction processing workload: 240MB/30000
work_mem: this is expensive, as it's allocated per-sort, per-connection. About 2048K might be good for us. Can increase with more RAM.
maintenance_work_mem: this is how much RAM VACUUM and ANALYZE use. Ideally about 50-75% of the largest table, but our tables can easily grow past 600MB, so we'll choose about 512MB, since we have 2GB RAM.
checkpoint_segments: needs to be larger for transaction-heavy and insert-heavy databases. To calculate the actual disk space taken by the log:
(checkpoint_segments * 2 + 1) * 16MB. Suggested between 12 and 256, starting conservatively and raising it if you see warnings. Will investigate.
effective_cache_size: tells the query planner the largest possible object that can be cached in memory. Should be about 2/3 of memory. Will choose 1 gig.
Auto-vacuum: the default settings for autovacuum are very conservative, though, and are more suitable for a very small database. I generally use something aggressive like:
-D -v 400 -V 0.4 -a 100 -A 0.3
This vacuums tables after 400 rows + 40% of the table has been updated or deleted, and analyzes after 100 rows + 30% of the table has been inserted, updated or deleted. The above configuration also lets me set my max_fsm_pages to 50% of the data pages in the database with confidence that that number won't be overrun, causing database bloat.
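The sizing rules above reduce to simple arithmetic, so here's a quick sketch of them. The 16MB WAL segment size, the (2n + 1) formula, and the -v 400 -V 0.4 thresholds come from the notes above; the function names and the example row counts are mine:

```python
WAL_SEGMENT_MB = 16  # size of one WAL segment file

def wal_disk_mb(checkpoint_segments):
    """Worst-case disk taken by the WAL: (checkpoint_segments * 2 + 1) * 16MB."""
    return (checkpoint_segments * 2 + 1) * WAL_SEGMENT_MB

def vacuum_threshold(table_rows, base=400, scale=0.4):
    """Rows updated/deleted before pg_autovacuum (-v 400 -V 0.4) vacuums a table."""
    return base + scale * table_rows

def worst_case_sort_mb(connections, work_mem_mb, sorts_per_conn=1):
    """Upper bound on sort memory, since work_mem is per-sort, per-connection."""
    return connections * work_mem_mb * sorts_per_conn

print(wal_disk_mb(12))                 # low end of the suggested range: 400 MB
print(wal_disk_mb(256))                # high end: 8208 MB
print(vacuum_threshold(1_000_000))     # 1M-row table vacuums after 400400.0 rows change
print(worst_case_sort_mb(150, 2))      # 150 connections at 2MB work_mem: 300 MB
```

This makes it easy to see why checkpoint_segments should start conservative: the jump from 12 to 256 segments is 400MB vs. roughly 8GB of log on disk.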
docs.sun.com: man pages section 1: User Commands
Just for my own reference, this is where the priocntl man page is. To stop the priority degradation of a CPU-bound process, I'm using: priocntl -e -c FX -m 60 -p 60