Notes

Please take a look at the attached file: automation-report.tar.gz
It shows the 2.40 automation framework timeout happening in the second test.

Some more information: the process that hogs the CPU is doing this:
postgres 3773 98.7 1,6 673528 67884 ?? Rs 3:02pm 82:03.99 postgres: tad openbravo 127.0.0.1(52615) SELECT
In addition, the pg_locks table shows 1768 locks held by this very process.
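A count like the one above can be reproduced with a query against pg_locks; this is only a sketch, with pid 3773 taken from the ps output:

openbravo=# SELECT mode, count(*) FROM pg_locks WHERE pid = 3773 GROUP BY mode;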
Meanwhile I've tweaked the PostgreSQL logging to get timestamps and warnings about long-running queries (see attachment).
I've grepped the pain points into the attachment 'durations'; they take some 70-80 minutes. Someone more knowledgeable should take a closer look at these.
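For reference, this kind of logging tweak boils down to two postgresql.conf settings; the values below are illustrative examples rather than the exact ones from this setup:

log_line_prefix = '%t [%p] '           # prepend a timestamp and the backend pid to every log line
log_min_duration_statement = 60000     # log the duration of any statement running longer than 60s (value in ms)

followed by a configuration reload (e.g. pg_ctl reload) so the settings take effect without a restart.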
Yet more information: the 'hot' tables in this scenario are:
openbravo=# select relname,relfilenode from pg_class where oid=40297 or oid=39915 or oid=40434;
 relname                  | relfilenode
--------------------------+-------------
 ad_process_scheduling    |       40434
 ad_model_object_mapping  |       40297
 ad_client                |       39915
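To tie this to the lock observation above, pg_locks can be joined against pg_class to see which relations the busy backend holds locks on; the query below is only an illustration (pid 3773 again comes from the ps output):

SELECT c.relname, l.mode, count(*)
  FROM pg_locks l
  JOIN pg_class c ON c.oid = l.relation
 WHERE l.pid = 3773
 GROUP BY c.relname, l.mode
 ORDER BY count(*) DESC;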
I did check that process, and the inability to work while it is running is intentional: the process disables triggers and FK constraints, so working while it is running would lead to problems.
However, the runtime reported in your case should normally be much better. I tested the 2.50 community virtual appliance, and the process to delete the preconfigured client took only about 90 seconds.
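For background on why working in parallel is unsafe, a delete-client process of this kind typically brackets its bulk deletes with statements like the following; this is only a schematic sketch using one of the tables listed above, not the actual implementation:

ALTER TABLE ad_client DISABLE TRIGGER ALL;  -- also disables the FK constraint triggers on the table
-- ... bulk deletes run here without trigger or FK checking ...
ALTER TABLE ad_client ENABLE TRIGGER ALL;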
@schmidtm: Is it okay to retitle this issue to track/address the slowness problem after this explanation?

Yes, it's fine to change the title now that I understand that the behavior itself is intentional.
Nevertheless I would love to get this one resolved, since it's definitely a show-stopper for me.

Retitled to address the slowness issue, as the blocking behavior is intentional after discussion with the reporter.

The new Delete client process works quite fast on big clients, so this should no longer happen.