MySQL ¶
Replacing the simple SQLite database with MySQL allows greater scalability.
Notes ¶
- The databases/ directory contains what web2py knows about your database.
- Set deployment_settings.base.migrate = True in models/000_config.py before you update the code, then switch back to deployment_settings.base.migrate = False afterwards.
- In case of database mismatches, check the databases/ directory for the appropriate .table file and change it accordingly.
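The migration toggle described above is a one-line change. A sketch of the relevant fragment of models/000_config.py (the exact surrounding settings vary by Eden version):

```python
# models/000_config.py (fragment)
# Enable migrations before updating the code or changing models:
deployment_settings.base.migrate = True
# After the updated code has run once and the .table files in the
# databases/ directory have been rewritten, switch back to:
#deployment_settings.base.migrate = False
```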
Installation ¶
apt-get install mysql-server
Install phpMyAdmin ¶
optional, but highly recommended for easier GUI-based administration
apt-get install phpmyadmin
Install Python support for MySQL ¶
apt-get install python-mysqldb
or
wget http://pypi.python.org/packages/source/M/MySQL-python/MySQL-python-1.2.3.tar.gz#md5=215eddb6d853f6f4be5b4afc4154292f
tar zxvf MySQL-python-1.2.3.tar.gz
cd MySQL-python-1.2.3
python setup.py install
For Windows try this binary:
Create the database ¶
The database must be created within MySQL before web2py can connect to it and add the relevant tables:
mysql -u root -p

CREATE DATABASE sahana;
GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,ALTER,DROP ON sahana.* TO 'sahana'@'localhost' IDENTIFIED BY 'mypassword';
FLUSH PRIVILEGES;
Configure Eden ¶
Connection pools allow database connections to be reused. The number used is defined in models/000_config.py.
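In web2py, the pool size is ultimately passed to the DAL constructor. A sketch of how the connection string and pool setting fit together (the credentials and pool size here are illustrative; in Eden the real values come from models/000_config.py and the DAL is constructed in the db model, not written by hand):

```python
# Illustrative only -- shows the web2py DAL parameters involved.
db = DAL("mysql://sahana:mypassword@localhost/sahana",
         pool_size=10,   # maximum number of pooled connections kept open
         migrate=False)  # mirrors deployment_settings.base.migrate
```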
How many pools? ¶
Whatever you choose, the pool always starts with one connection and grows with the number of concurrent requests, up to the value of pools (the maximum number of pooled connections). So pools should be the maximum number of concurrent requests you expect, that is:
<number of requests/second> × <average time per request in seconds>
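A worked example of this sizing rule (by Little's law, the expected number of concurrent requests is the arrival rate multiplied by the average time per request; the traffic figures here are assumptions for illustration):

```python
import math

requests_per_second = 50      # assumed peak arrival rate
avg_time_per_request = 0.1    # assumed average seconds per request

# Little's law: concurrent requests = arrival rate * average service time
concurrent = requests_per_second * avg_time_per_request
pools = int(math.ceil(concurrent))  # round up to a whole number of connections
print(pools)  # -> 5
```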
Optimisation ¶
For Production instances, give the MySQL database plenty of RAM:
innodb_buffer_pool_size = 2048M
innodb_file_per_table
innodb_thread_concurrency = 8
Index the columns which are sorted on (especially on tables with many records). [We sort by name in most dropdowns; however, most of the time this field is unique and so is indexed automatically]:
- gis_location.name
- pr_person.first_name
- pr_person.middle_name
- pr_person.last_name
ALTER TABLE `gis_location` ADD INDEX ( `name` );
ALTER TABLE `pr_person` ADD INDEX ( `first_name` );
ALTER TABLE `pr_person` ADD INDEX ( `middle_name` );
ALTER TABLE `pr_person` ADD INDEX ( `last_name` );
These should already be created automatically by models/zzz_1st_run.py.
Troubleshooting ¶
You can view the list of expensive queries using: tbc
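One common approach (an assumption here, since the page leaves the exact method "tbc") is MySQL's slow query log, enabled in my.cnf:

```
# /etc/mysql/my.cnf (fragment) -- assumption: slow-query-log approach
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 2        # log queries taking longer than 2 seconds
```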
Replication ¶
In order to have a backup instance, configure replication from the Master node to a Backup node:
- http://www.debiantutorials.net/mysql-database-replication-masterslave/
- http://www.howtoforge.com/how-to-repair-mysql-replication
On Master: ¶
pico /etc/mysql/my.cnf

#bind-address = 127.0.0.1
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_do_db = sahana

/etc/init.d/mysql restart
mysql -u root -p

GRANT REPLICATION SLAVE ON *.* TO 'username'@'sl.av.e.ip' IDENTIFIED BY 'mypassword';
FLUSH PRIVILEGES;
SHOW MASTER STATUS;
Read the log file name & position for configuring the slave (see below).
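SHOW MASTER STATUS prints a table like the following (the values shown here are illustrative, not from a real deployment); the File and Position columns supply MASTER_LOG_FILE and MASTER_LOG_POS for the CHANGE MASTER statement on the slave:

```
mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      107 | sahana       |                  |
+------------------+----------+--------------+------------------+
```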
On Slave: ¶
pico /etc/mysql/my.cnf

server-id = 2
master-host = ma.st.er.ip
master-user = username
master-password = mypassword
master-connect-retry = 60
replicate-do-db = sahana

/etc/init.d/mysql restart
mysql -u root -p

SLAVE STOP;
CHANGE MASTER TO MASTER_HOST='ma.st.er.ip', MASTER_USER='username', MASTER_PASSWORD='mypassword', MASTER_LOG_FILE='log_file', MASTER_LOG_POS=log_position;
SLAVE START;

crontab -e

# m h dom mon dow command
55 * * * * /home/instance/scripts/automysqlbackup
55 * * * * /home/instance/scripts/sync-dir
IP Failover ¶
You can configure the slave to adopt the same IP as the master using Heartbeat:
If using Slicehost: