Writing code which runs fast.
NB Performance is not everything, and you should not optimize code without actually having a problem with it. In most cases, response times of around 600ms are perfectly acceptable (and minor compared with average download times on-site).
There are more things to consider besides performance, e.g. usability and maintainability of code. New contributors should be able to easily understand the code, and bugs should be easy to find.
- If a specific inner-loop routine cannot be optimized in Python, then consider writing a C routine for this use case.
- Optimize the models, throw away what we don't need. Every field counts.
References (joins) are especially problematic for performance, as they execute implicit DB requests. The more complex the references, the slower the model loads.
- Function definitions in models do _NOT_ hurt - the functions are only compiled, not executed, when the model is loaded; pre-compiling the whole application gives a speed-up of just 10ms (compare that to the total execution times!).
In contrast, module-level statements (which are executed every time the model is loaded, e.g. CRUD string definitions) do slow it down. Suggestion: put them into a "config" function for that model, and call it only when needed.
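A minimal sketch of that suggestion in plain Python (the function and cache names here are made up for illustration, not an Eden API):

```python
# Module-level statements like these would run on EVERY model load:
#     mytable_crud_strings = {"title_create": "Add Record", ...}

# Deferred variant: build the settings on first use only, then cache them.
_config_cache = {}

def mytable_config():
    """Return the (hypothetical) per-table settings, building them just once."""
    if "mytable" not in _config_cache:
        _config_cache["mytable"] = {
            "title_create": "Add Record",
            "title_list": "List Records",
        }
    return _config_cache["mytable"]
```

Requests that never touch this table then never pay for building its strings.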
- Avoid _implicit_ redirects (i.e. redirects without user interaction, e.g. as in open_module; some redirects cannot be avoided).
A redirect simply doubles the response time, as it executes a new request and thus loads everything again.
- Be careful with Ajax - it may work nicely in local environments, but in real-world deployments it has proven unreliable and slow.
- Python runs very fast compared with DB queries, so it can be much faster to retrieve 100 rows in one query and then drop 95 of them in Python than to retrieve 5 rows in 5 queries (= do not loop over queries; loop over the results instead).
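A standalone illustration of this, using sqlite3 rather than the DAL (the table and data are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (id INTEGER, name TEXT)")
con.executemany("INSERT INTO person VALUES (?, ?)",
                [(i, "name%d" % i) for i in range(100)])

wanted = {3, 7, 11, 42, 99}

# One round-trip: fetch all 100 rows, then filter in Python
rows = con.execute("SELECT id, name FROM person").fetchall()
selected = [row for row in rows if row[0] in wanted]

# ...instead of one round-trip per id:
#     for i in wanted:
#         con.execute("SELECT id, name FROM person WHERE id=?", (i,)).fetchone()
```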
- Consider having configurations which are read from the DB frequently but written rarely be exported from the DB into configuration files (like the CSS generated from themes).
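A minimal sketch of such an export (JSON is chosen arbitrarily here, and the function names are illustrative):

```python
import json
import os
import tempfile

def export_settings(settings, path):
    """Write rarely-changing settings (as read from the DB) out to a flat file."""
    with open(path, "w") as f:
        json.dump(settings, f)

def load_settings(path):
    """Requests then read this file instead of querying the DB each time."""
    with open(path) as f:
        return json.load(f)

# Round-trip demonstration with made-up settings:
fd, path = tempfile.mkstemp(suffix=".json")
os.close(fd)
export_settings({"theme": "default", "logo": "logo.png"}, path)
settings = load_settings(path)
os.remove(path)
```

The file only needs to be rewritten when the settings actually change in the DB.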
NB These effects vary from case to case, so use the Profiler to see how they behave in your own code...
    for i in range(0, len(rows)):
        row = rows[i]

runs much faster than:

    for row in rows:
(0.05 vs. 0.001 seconds in one test case, a 2x improvement in another, and a slight slowdown in a third).
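Such micro-results are easy to re-check on your own machine with timeit (the numbers will differ per environment):

```python
import timeit

rows = list(range(10000))

def indexed_loop():
    # Loop by index, as in the first variant above
    for i in range(0, len(rows)):
        row = rows[i]

def direct_loop():
    # Iterate directly over the rows
    for row in rows:
        pass

t_indexed = timeit.timeit(indexed_loop, number=100)
t_direct = timeit.timeit(direct_loop, number=100)
print("indexed: %.4fs  direct: %.4fs" % (t_indexed, t_direct))
```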
    value = db(table.id == id).select(table.field, limitby=(0, 1)).first().field

runs 1.5x faster than:

    value = table[id].field
(0.012 vs. 0.007 seconds in a test case)
NB If you expect only one record, then limitby provides a big speedup!
- Web2Py can use cProfile:

    web2py.py -F profiler.log

- or, if running as a service, edit:

    profiler_filename = 'profiler.log'
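cProfile can also be driven programmatically to profile a single suspect function; the handler below is just a stand-in:

```python
import cProfile
import io
import pstats

def slow_handler():
    """Stand-in for a controller action you suspect is slow."""
    return sum(i * i for i in range(100000))

profiler = cProfile.Profile()
profiler.enable()
slow_handler()
profiler.disable()

# Print the five most expensive calls, sorted by cumulative time
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print(report)
```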
- YSlow plugin for Firebug: http://developer.yahoo.com/yslow/
- You can also use Pylot to test the application's behavior under load and get more reliable results (plus a nicer report format).
I tested the "Welcome" application and found that it executes up to 86 requests/second in my local environment. A similar value has been reported to the web2py group, and it seems to be the maximum we can expect (considering that the "Welcome" application is really thin).

UltraCore requires response times for interactive views of strictly below 250ms on an average computer, which means we can execute up to 4 requests/second. That may sound very slow, but compared with what we currently have, this would be a 4x speed-up.

So, if you implement a new view, please check whether it loads that fast on your local computer (use FireBug to test). If not, look first at the model, then at the static contents (ExtJS? Load only the necessary components, not ext_all.js!), and then at the controller (you will find that the controller is usually the fastest component of all).
Golden Rules for DB Queries
These "rules" might seem obvious; however, sometimes you need to take a second look at your code:
- Use joins - one complex query is usually more efficient than multiple simple queries (and gives the DB server a chance to optimize):
instead of:

    codes = db(db.mytable.name == name).select()
    for code in codes:
        records = db(db.othertable.code == code.code).select()

better:

    rows = db((db.mytable.name == name) &
              (db.othertable.code == db.mytable.code)).select()
    for row in rows:
        mytable_record = row.mytable
        othertable_record = row.othertable
- Ask exactly for what you expect (=limit your query):
- if you expect only one result, then limit the search by limitby:
instead of:

    db(db.mytable.id == id).select().first()

better:

    db(db.mytable.id == id).select(limitby=(0, 1)).first()
- if you need only certain fields of a record, then don't ask for all:
instead of:

    my_value = db(db.mytable.id == id).select(limitby=(0, 1)).first().value

better:

    my_value = db(db.mytable.id == id).select(db.mytable.value, limitby=(0, 1)).first().value
- Don't ask twice for the same record. Look through your code to see whether you need the same record again later on:
instead of:

    my_value = db(db.mytable.id == id).select(db.mytable.value, limitby=(0, 1)).first().value
    ...
    other_value = db(db.mytable.id == id).select(db.mytable.other_value, limitby=(0, 1)).first().other_value

better:

    row = db(db.mytable.id == id).select(db.mytable.value, db.mytable.other_value, limitby=(0, 1)).first()
    if row:
        my_value = row.value
        other_value = row.other_value
- Don't loop over queries if you can avoid it. Sometimes this is not as easy to see as in the following example:
instead of:

    for id in ids:
        my_record = db(db.mytable.id == id).select().first()
        ...

better:

    records = db(db.mytable.id.belongs(ids)).select()
    for record in records:
        ...