Openbravo Issue Tracking System - Openbravo ERP |
| View Issue Details |
|
| ID | Project | Category | View Status | Date Submitted | Last Update |
| 0053735 | Openbravo ERP | 04. Warehouse management | public | 2023-10-20 12:26 | 2024-01-29 12:40 |
|
| Reporter | ngarcia | |
| Assigned To | vmromanos | |
| Priority | urgent | Severity | major | Reproducibility | always |
| Status | closed | Resolution | unable to reproduce | |
| Platform | | OS | 5 | OS Version | |
| Product Version | | |
| Target Version | | Fixed in Version | | |
| Merge Request Status | |
| Review Assigned To | vmromanos |
| OBNetwork customer | OBPS |
| Web browser | |
| Modules | Core |
| Support ticket | 77148 |
| Regression level | |
| Regression date | |
| Regression introduced in release | |
| Regression introduced by commit | |
| Triggers an Emergency Pack | No |
|
| Summary | 0053735: Sharelocks in M_STORAGE_DETAIL caused by the UPDATE_M_STORAGE_DETAIL function with a large number of records |
| Description | Sharelocks in M_STORAGE_DETAIL caused by the UPDATE_M_STORAGE_DETAIL function with a large number of records |
| Steps To Reproduce | Prerequisites:
- An environment with 1.5M records in the M_STORAGE_DETAIL table
- A high number of AWO tasks pending confirmation that fire the M_MOVEMENTLINE_TRG trigger
1) Start confirming the tasks (20-25 per minute)
2) Check that sharelocks can be observed in the Postgres log (a diagnostic query sketch follows this table)
3) openbravo.log shows that the Data Import Entries related to OBAWO_Task increase in duration
|
| Proposed Solution | |
| Additional Information | |
| Tags | No tags attached. |
| Relationships | |
| Attached Files | |
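Step 2 of the reproduction relies on spotting sharelock waits in the Postgres log. As a minimal sketch, assuming direct database access and PostgreSQL 9.6 or later (where pg_blocking_pids is available), a query along these lines shows the same lock waits live; the filter on the query text is illustrative only:

-- Illustrative diagnostic query (not part of the original report): pair each
-- session waiting on a lock for M_STORAGE_DETAIL with the session that holds
-- the conflicting lock.
SELECT waiting.pid    AS waiting_pid,
       waiting.query  AS waiting_query,
       blocking.pid   AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity AS waiting
JOIN pg_stat_activity AS blocking
  ON blocking.pid = ANY (pg_blocking_pids(waiting.pid))
WHERE waiting.wait_event_type = 'Lock'
  AND waiting.query ILIKE '%m_storage_detail%';

Each returned row pairs a waiting session with the session holding the conflicting lock, which is the same relationship the Postgres log entries describe.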
|
| Issue History |
| Date Modified | Username | Field | Change |
| 2023-10-20 12:26 | ngarcia | New Issue | |
| 2023-10-20 12:26 | ngarcia | Assigned To | => Triage Omni WMS |
| 2023-10-20 12:26 | ngarcia | OBNetwork customer | => OBPS |
| 2023-10-20 12:26 | ngarcia | Modules | => Core |
| 2023-10-20 12:26 | ngarcia | Support ticket | => 77148 |
| 2023-10-20 12:26 | ngarcia | Triggers an Emergency Pack | => No |
| 2023-10-23 10:01 | malsasua | Steps to Reproduce Updated | bug_revision_view_page.php?rev_id=27018#r27018 |
| 2023-10-24 09:18 | mtaal | Assigned To | Triage Omni WMS => ludmila_ursu |
| 2023-10-24 11:09 | ludmila_ursu | Note Added: 0156225 | |
| 2023-11-20 11:42 | mtaal | Note Added: 0157337 | |
| 2023-11-20 11:42 | mtaal | Status | new => feedback |
| 2023-11-20 13:03 | mtaal | Assigned To | ludmila_ursu => mtaal |
| 2024-01-05 11:30 | mtaal | Note Added: 0158889 | |
| 2024-01-29 12:38 | vmromanos | Status | feedback => scheduled |
| 2024-01-29 12:38 | vmromanos | Assigned To | mtaal => vmromanos |
| 2024-01-29 12:40 | vmromanos | Review Assigned To | => vmromanos |
| 2024-01-29 12:40 | vmromanos | Note Added: 0159860 | |
| 2024-01-29 12:40 | vmromanos | Status | scheduled => closed |
| 2024-01-29 12:40 | vmromanos | Resolution | open => unable to reproduce |
|
Notes

(0157337)
mtaal
2023-11-20 11:42
Hello,
Thanks to the customer/partner for the extensive analysis. For now this seems to be normal behavior. We would advise the customer/partner to check with the cloud team to analyze the database and see whether the database settings/config can be improved at the system level (see the sketch below).
grt. Martin
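
The ticket does not list which settings the cloud team should review. As one hedged example, lock waits only appear in the Postgres log (the signal used in the reproduction steps) when log_lock_waits is enabled, with deadlock_timeout acting as the reporting threshold:

-- Make lock waits longer than deadlock_timeout visible in the Postgres log.
ALTER SYSTEM SET log_lock_waits = on;
ALTER SYSTEM SET deadlock_timeout = '1s';  -- the default; also the logging threshold
SELECT pg_reload_conf();                   -- apply without a restart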

(0158889)
mtaal
2024-01-05 11:30
Update from Victor on 22nd December:
Asked to involve the partner and, in parallel, to send us the pgBadger report so we have more information to analyze.

(0159860)
vmromanos
2024-01-29 12:40
After applying the workaround to reduce the size of M_Storage_Detail, the issue is no longer reproducible.
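
The note does not describe the workaround beyond reducing the size of M_Storage_Detail. A common way to shrink that table is to purge rows whose stock quantities are all zero; the statement below is only a sketch of that idea, assuming the standard Openbravo columns qtyonhand and qtyorderonhand are the relevant ones, and is not necessarily the procedure actually applied:

-- Sketch only: purge zero-quantity storage detail rows to shrink the table.
-- Verify column names and business impact against your instance, and take a
-- backup before running anything like this.
DELETE FROM m_storage_detail
WHERE qtyonhand = 0
  AND qtyorderonhand = 0;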