wiki:InstallationGuidelines/MySQL

Version 47 (modified by Fran Boon, 10 years ago)

MySQL

Replacing the simple SQLite database with MySQL allows for greater scalability.

Notes

  1. The databases directory contains what web2py knows about your database structure.
  2. Set deployment_settings.base.migrate = True in models/000_config.py before you update the code, and switch back to deployment_settings.base.migrate = False afterwards.
  3. In case of database mismatches, check the databases directory for the appropriate .table file and adjust it accordingly.

Installation

apt-get install mysql-server

Install phpMyAdmin

Optional, but highly recommended for easier GUI-based administration:

apt-get install phpmyadmin

Install Python support for MySQL

apt-get install python-mysqldb

or

wget http://pypi.python.org/packages/source/M/MySQL-python/MySQL-python-1.2.3.tar.gz#md5=215eddb6d853f6f4be5b4afc4154292f
tar zxvf MySQL-python-1.2.3.tar.gz
cd MySQL-python-1.2.3
python setup.py install

For Windows, try these sources for installers:

Configuration

You can store the MySQL password in ~/.my.cnf for easy command-line access:

vim ~/.my.cnf
[client]
user=root
password=my_password
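Since this file holds the password in plain text, it is worth restricting it to your own user (standard file-permission hygiene, not specific to MySQL):

```shell
# make the credentials file readable/writable by the owner only
chmod 600 ~/.my.cnf
```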

Create the database

The database must be created within MySQL before web2py can connect to it and add the relevant tables:

mysql -u root -p
 CREATE DATABASE sahana;
 GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,INDEX,ALTER,DROP ON sahana.* TO 'sahana'@'localhost' IDENTIFIED BY 'password';
 FLUSH PRIVILEGES;
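You can then confirm that the account was created with the intended privileges (run from the same mysql session, or after reconnecting as root):

```sql
SHOW GRANTS FOR 'sahana'@'localhost';
```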

Configure Eden

Connection pools allow database connections to be reused, avoiding the overhead of opening a new connection for each request.

The number of pools is defined in models/000_config.py
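As a sketch only — the exact setting names vary between Eden versions, so treat the names and values below as assumptions and check your own copy of 000_config.py — the relevant block looks roughly like:

```python
# models/000_config.py (hypothetical values; setting names may differ by version)
deployment_settings.database.db_type = "mysql"
deployment_settings.database.host = "localhost"
deployment_settings.database.database = "sahana"
deployment_settings.database.username = "sahana"
deployment_settings.database.password = "password"
deployment_settings.database.pool_size = 30
```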

How many pools?

Whatever you choose, it always starts with one and grows with the number of concurrent requests up to the value of pools (the max number of concurrent pools). So pools should be the max number of concurrent requests you expect, i.e.:

<number of requests per second> x <average time per request in seconds>
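As a worked example with hypothetical load figures — 10 requests per second, each taking 0.3 seconds on average — the expected concurrency is the product of arrival rate and request duration (Little's law), rounded up:

```shell
# hypothetical load: 10 requests/second, 0.3 seconds per request on average
# expected concurrency = rate * duration, rounded up to a whole number of pools
awk 'BEGIN { rate = 10; duration = 0.3; n = rate * duration; pools = (n == int(n)) ? n : int(n) + 1; print pools }'
```

So here you would set pools to 3.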

Optimisation

For Production instances, give the MySQL database plenty of RAM (in /etc/mysql/my.cnf):

innodb_buffer_pool_size = 2048M
innodb_file_per_table
innodb_thread_concurrency = 8
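A common rule of thumb (an assumption to tune for your workload, not a fixed recommendation) is to size innodb_buffer_pool_size at roughly 70% of total RAM on a dedicated database host. On Linux you can compute a starting value from /proc/meminfo:

```shell
# print ~70% of total RAM in megabytes, as a starting innodb_buffer_pool_size
awk '/^MemTotal:/ { printf "%dM\n", $2 * 0.7 / 1024 }' /proc/meminfo
```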

Index columns which are sorted on, especially on tables with many records. We sort by name in most dropdowns; however, most of the time this field is unique and so indexed automatically:

  • gis_location.name
  • pr_person.first_name
  • pr_person.middle_name
  • pr_person.last_name
CREATE INDEX name__idx on gis_location(name);
CREATE INDEX first_name__idx on pr_person(first_name);
CREATE INDEX middle_name__idx on pr_person(middle_name);
CREATE INDEX last_name__idx on pr_person(last_name);

These should already be done automatically by models/zzz_1st_run.py
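To check which indexes already exist on a table (using the pr_person table from above as the example):

```sql
SHOW INDEX FROM pr_person;
```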

Possibly useful tips:

Replication

In order to have a backup instance, configure replication from the Master node to a Backup node:

On Master:

pico /etc/mysql/my.cnf
#bind-address = 127.0.0.1
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_do_db = sahana

/etc/init.d/mysql restart

mysql -u root -p
GRANT REPLICATION SLAVE ON *.* to 'username'@'sl.av.e.ip' IDENTIFIED BY 'mypassword';
FLUSH PRIVILEGES;
SHOW MASTER STATUS;

Note the log filename & position from the output for configuring the slave (see below).

On Slave:

log_file/log_position come from the result of 'SHOW MASTER STATUS' on the master (e.g. mysql-bin.000001 / position-id)

pico /etc/mysql/my.cnf
server-id = 2
master-host = ma.st.er.ip
master-user = username
master-password = mypassword
master-connect-retry = 60
replicate-do-db = sahana

/etc/init.d/mysql restart

mysql -u root -p

STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='ma.st.er.ip', MASTER_USER='username', MASTER_PASSWORD='mypassword', MASTER_LOG_FILE='log_file', MASTER_LOG_POS=log_position;
START SLAVE;

crontab -e 
# m h  dom mon dow   command
55 * * * * /home/instance/scripts/automysqlbackup
55 * * * * /home/instance/scripts/sync-dir

IP Failover

You can configure the slave to adopt the same IP as the master using Heartbeat:

If using Slicehost:

Administration

Command line access:

mysql -u root -p
 show databases;
 \r <database>
 show tables;
 select * from <table>;

Dump a database to a text file (for import into another server):

mysqldump <database> > <database>.sql

Restore:

mysqladmin --user=root create <database>
mysql <database> < <database>.sql

or

mysql -p
CREATE DATABASE <database>;
\u <database>
\. <database>.sql

Backups

http://sourceforge.net/projects/automysqlbackup

cp automysqlbackup-2.5.1-01.sh /root/automysqlbackup.sh
chmod +x /root/automysqlbackup.sh
vi /root/automysqlbackup.sh
USERNAME=root
PASSWORD=<password>
DBHOST=localhost
DBNAMES="sahana"
BACKUPDIR="/var/spool/mysql"

or:

  • Use mysqlshow to extract all MySQL database names
  • Then mysqldump to export each database to an individual flat file.
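The two steps above can be sketched as a loop (a sketch, assuming credentials in ~/.my.cnf and a writable /var/spool/mysql; the awk filter just strips the table borders from mysqlshow's output, and --single-transaction gives a consistent snapshot of InnoDB tables):

```shell
# dump every database that mysqlshow reports to its own flat file
mysqlshow | awk -F'|' 'NR > 3 && NF > 1 { gsub(/ /, "", $2); if ($2 != "" && $2 != "information_schema") print $2 }' |
while read db; do
    mysqldump --single-transaction "$db" > "/var/spool/mysql/$db.sql"
done
```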

Reset a lost root password

Debian:

dpkg-reconfigure mysql-server-5.1

Manually:

/etc/init.d/mysql stop
mysqld_safe --skip-grant-tables &

mysql -u root

UPDATE mysql.user SET Password=PASSWORD('mynewpassword') WHERE User='root';
FLUSH PRIVILEGES;
quit

/etc/init.d/mysql stop
/etc/init.d/mysql start
mysql -u root -p

Troubleshooting

You can log expensive queries by editing /etc/mysql/my.cnf and uncommenting/adding:

log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 2

and restarting MySQL.

This will log queries that take longer than 2 seconds to run. View with:

tail -f /var/log/mysql/mysql-slow.log
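Each entry in the slow log begins with a '# Query_time:' comment line, so a quick way to count how many slow queries have been logged is:

```shell
# count slow-query entries recorded so far
grep -c '^# Query_time:' /var/log/mysql/mysql-slow.log
```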

By installing 'mytop', you can see real-time status of the database:

sudo apt-get install mytop
mytop -uUSERNAME -pPASSWORD

Type 'o' to force the longest-running queries to the top of the screen.


Outside of this, the usual tools for monitoring memory/cpu usage are effective.

Note that high MySQL CPU usage may actually indicate a high io-wait condition, as many programs do not report iowait separately from user/sys CPU usage, which hides the real issue; top is your friend. (iowait is time spent reading from/writing to IO; in this case, disk.) High iowait can occur when attempting to view or sort massive quantities of data in Eden. Using the above tips for increasing MySQL RAM usage will give MySQL space to do sorts in memory.


DeveloperGuidelinesDataMigration

InstallationGuidelines
