The basic installation script is designed for minimal server specifications and the initial development period.
When your solution transitions to production, we recommend optimizing some settings.
The key setting for FPM is the number of available workers. When an active user makes a request to the server, Nginx passes it to a copy of PHP loaded into RAM.
The number of these copies waiting for connections is managed by the FPM config. While a copy is idle, it only consumes RAM (approximately 50 MB per instance); when it receives a task, it starts consuming CPU as well.
You need to have at least one copy for each actively working user in the system, plus one for each simultaneous remote connection.
For the MIT version, allocate 2 copies per active user, since one is always fully occupied with checking the system notification tables (in the PRO version this is handled by a Go module, freeing up 50 MB of RAM per active user).
Configure in:
nano /etc/php/8.3/fpm/pool.d/totum.conf
pm.max_children = 20
pm.start_servers = 10
pm.min_spare_servers = 10
pm.max_spare_servers = 20
pm.max_children
— the maximum number of FPM worker processes. Set it to 1.5-2 times the number of peak active users, plus one for each simultaneous remote connection. RAM usage follows from this number: pm.max_children * 50 MB should occupy no more than 25-30% of the system's total RAM.
pm.start_servers
— set to half of pm.max_children.
pm.min_spare_servers
— equal to pm.start_servers.
pm.max_spare_servers
— equal to pm.max_children.
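As a worked illustration with hypothetical numbers: an 8 GB server with 15 peak active users on the MIT version and no remote connections needs 15 * 2 = 30 workers; 30 * 50 MB = 1.5 GB, about 19% of total RAM, which fits the 25-30% budget:
pm.max_children = 30
pm.start_servers = 15
pm.min_spare_servers = 15
pm.max_spare_servers = 30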
While a process is executing a user request, it writes its working data to RAM.
php_admin_value[memory_limit]
— sets the limit on memory consumed by a single process while executing a request. We do not recommend setting it higher than 1024 MB (not every request consumes this much; it is an upper bound). If your processes crash due to insufficient memory, you should rework the solution instead.
After setting the values, save the configuration file and restart FPM:
service php8.3-fpm restart
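If the restart fails, you can first check the pool configuration for syntax errors with FPM's built-in test mode (the -t flag):
php-fpm8.3 -t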
For PostgreSQL, the most important setting is likewise the amount of RAM available to the database for caching frequently accessed data.
You can find the path to the configuration file by running the command:
su - postgres -c "psql -c 'SHOW config_file;'"
Open the configuration file at the specified path, for example:
nano /etc/postgresql/16/main/postgresql.conf
max_connections
— the number of simultaneous connections to the database. It is recommended to set it to twice the number of PHP-FPM workers.
Each connection consumes approximately 5 MB, so max_connections * 5 MB is the RAM used for servicing connections. It should not exceed 25% of the system's total RAM.
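Continuing the hypothetical example above (30 FPM workers, 8 GB RAM):
max_connections = 60
60 connections * 5 MB = 300 MB, well within the 25% budget.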
shared_buffers
— the main data cache of the database; the common rule of thumb is about 25% of the system's total RAM. If you have free RAM left, i.e., you calculated that you can set shared_buffers to 8 GB but your entire database is only 4 GB, then set shared_buffers = 4GB and effective_cache_size to twice shared_buffers, i.e., 8GB (but no more than 80% of total RAM).
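For the hypothetical 8 GB server above, with a database larger than the cache, this would give:
shared_buffers = 2GB
effective_cache_size = 4GB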
random_page_cost
— by default it is set to 4. This value is for HDD disks. If you have NVMe, set it to 0.1; if you have an SSD without NVMe, set it to 1.0.
effective_io_concurrency
— sets the maximum allowable number of parallel IO operations:
100 – for SSD with NVMe.
50 – for SSD without NVMe.
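For example, on a hypothetical NVMe-based server the two disk-related settings together would be:
random_page_cost = 0.1
effective_io_concurrency = 100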
If you have 2-4 CPU cores, do not touch the autovacuum parameters described here and below. But if your database has tables with millions of rows, you should allocate 2 CPU cores to autovacuum.
The thing is, PostgreSQL does not physically delete rows: a deleted row is only marked as obsolete, and a modified row is written as a new copy while the old one is marked as obsolete.
Over time, tables become bloated with these dead row versions and start to work less efficiently. autovacuum allows the database to reuse the space occupied by old copies of rows.
It is especially necessary if you perform a lot of mass delete and update operations on table rows.
autovacuum = on
— uncomment this line.
track_counts = on
— uncomment this as well because autovacuum is useless without it.
autovacuum_naptime = '30s'
— the default is 1min; you can leave it as is or reduce it to 30s.
autovacuum_max_workers
— set to 1-2 (roughly one worker per allocated core), but no more than 25% of the CPU cores in the system.
autovacuum_vacuum_cost_limit
— set it to 1000.
autovacuum_vacuum_cost_delay
— the default is 2ms, which can be left as is.
autovacuum_vacuum_scale_factor
— reduce it to 0.01 (vacuum is then triggered when dead rows exceed 1% of the table; the default is 20%).
autovacuum_vacuum_threshold
— the minimum number of dead rows, as an absolute count, that triggers a vacuum; you can leave it at 50.
autovacuum_analyze_scale_factor
— set it to 0.01.
autovacuum_work_mem
— sets the amount of memory for pointers to dead rows. By default, it is taken from the maintenance_work_mem parameter. The memory is allocated in full for each worker process separately: if you set 128 MB and have 8 worker processes, the system will allocate 1 GB at once (when all processes are active), even if a smaller amount would be sufficient. A reasonable value is therefore total RAM / (64 * autovacuum_max_workers).
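Putting the autovacuum settings together for a hypothetical 8-core, 8 GB server (autovacuum_work_mem: 8192 MB / (64 * 2) = 64 MB):
autovacuum = on
track_counts = on
autovacuum_naptime = '30s'
autovacuum_max_workers = 2
autovacuum_vacuum_cost_limit = 1000
autovacuum_vacuum_cost_delay = 2ms
autovacuum_vacuum_scale_factor = 0.01
autovacuum_vacuum_threshold = 50
autovacuum_analyze_scale_factor = 0.01
autovacuum_work_mem = 64MB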
If you have many free CPU cores (for example, a 16-32 core system) and most database queries are small but a few are large (for example, a cron job that builds aggregates), you can increase the number of CPU cores available to a single database query.
max_parallel_workers_per_gather
— the number of additional cores a single query may use; the default is 2.
These additional cores are less efficient than the main one executing the query: a connected core works at about 70% efficiency.
max_parallel_workers
— the maximum number of connected cores for the entire system; the default is 8.
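A hypothetical configuration for a 16-core server (with four extra cores, a single query gets roughly 1 + 4 * 0.7 ≈ 3.8 cores' worth of throughput):
max_parallel_workers_per_gather = 4
max_parallel_workers = 8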
If you have a lot of data, at some point it becomes more efficient to use database server clustering rather than adding cores for parallel scanning, as there is a limit to disk and processor bus speed.
You can cluster up to 1000 servers if needed.
Save the file and restart the database. If you have the PRO version, you need to stop totum-gom first:
service totum-gom stop
service postgresql restart
service totum-gom start
If you are not a professional system administrator, we recommend choosing a server plan where the average CPU load does not exceed 30% during peak hours, and the average RAM usage is no more than 60%.
If you see that the average figures are consistently higher, move to a larger server plan.
High-frequency processors are an advantage over standard ones.
There is also a big difference between a dedicated CPU and a shared CPU in your VPS plan. Shared means the host has sold the same physical processor to several customers at once. If your solution runs many large data-selection queries (i.e., in htop you can watch select queries running for several seconds), a dedicated CPU will perform better, since you have the entire core at your disposal rather than, say, 30% of it.
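To see which queries are currently running and for how long, you can use PostgreSQL's standard pg_stat_activity view; a sketch:
su - postgres -c "psql -c \"SELECT pid, now() - query_start AS runtime, query FROM pg_stat_activity WHERE state = 'active' ORDER BY runtime DESC;\""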
Using HDDs is not even worth considering under modern conditions.
When selecting from large tables, set indexes on the fields used for filtering (the where part).
This is done through table settings.
A reasonable number of indexes:
No more than one for every 10 fields in the row part.
No more than three additional indexes per table.
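To check whether large tables are being scanned without indexes, you can look at the sequential-scan counters in PostgreSQL's standard pg_stat_user_tables view (the database name totum below is an assumption; substitute your own):
su - postgres -c "psql -d totum -c \"SELECT relname, seq_scan, idx_scan FROM pg_stat_user_tables ORDER BY seq_scan DESC LIMIT 10;\""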