Re: unable to log into testdatabase
From: Karsten Hilbert
Subject: Re: unable to log into testdatabase
Date: Fri, 13 May 2022 13:02:13 +0200
Dear Gijs,
one of the screenshots shows a bug which has been
fixed (commit attached below) in v1.8.2. So you'll need at
least that version, which can be downloaded here:
https://www.gnumed.de/downloads/client/1.8/
if your distribution does not ship it yet.
Then we can check out other issues you might encounter.
Karsten
commit 786a67f4afe0e8c49243157faa2f11d9b72745f9
Author: Karsten Hilbert <karsten.hilbert@gmx.net>
Date: Mon Jun 15 11:18:20 2020 +0200
Fix faulty use of connection pooling
Starting with psycopg2 2.8, there were field reports
showing on-the-wire problems with database connections.
query failed in RO connection
exc.arg: SSL SYSCALL error: Die Ressource ist zur Zeit nicht verfügbar ("the resource is currently unavailable")
query failed in RO connection
pgerror: [extraneous data in "T" message]
These occurred in non-deterministic places within the
GNUmed code and within processing of one and the same
query, even with entirely trivial queries like
SELECT * FROM dem.v_praxis_branches WHERE true
Eventually, there was one report which showed breakage
while processing the *returned values* of a successful
query. It also showed interleaved data which might have
come from another query, resulting in malformed business
objects:
Traceback (most recent call last):
  File "/usr/share/gnumed/Gnumed/wxpython/gmGuiMain.py", line 3457, in OnInit
    if not self.__verify_praxis_branch():
  File "/usr/share/gnumed/Gnumed/wxpython/gmGuiMain.py", line 3669, in __verify_praxis_branch
    if not gmPraxisWidgets.set_active_praxis_branch(no_parent = True):
  File "/usr/share/gnumed/Gnumed/wxpython/gmPraxisWidgets.py", line 412, in set_active_praxis_branch
    branches = gmPraxis.get_praxis_branches()
  File "/usr/share/gnumed/Gnumed/business/gmPraxis.py", line 258, in get_praxis_branches
    return [ cPraxisBranch(row = {'data': r, 'idx': idx, 'pk_field': 'pk_praxis_branch'}) for r in rows ]
  File "/usr/share/gnumed/Gnumed/business/gmPraxis.py", line 258, in <listcomp>
    return [ cPraxisBranch(row = {'data': r, 'idx': idx, 'pk_field': 'pk_praxis_branch'}) for r in rows ]
  File "/usr/share/gnumed/Gnumed/pycommon/gmBusinessDBObject.py", line 337, in __init__
    self._init_from_row_data(row = row)
  File "/usr/share/gnumed/Gnumed/pycommon/gmBusinessDBObject.py", line 396, in _init_from_row_data
    self.pk_obj = row['data'][row['idx'][row['pk_field']]]
KeyError: 'pk_praxis_branch'
--- frame [get_praxis_branches]: #258, /usr/share/gnumed/Gnumed/business/gmPraxis.py -------------------
cmd = SELECT * FROM dem.v_praxis_branches WHERE true
rows = [[8, 'unit of Enterprise Healthcare Unit', 12, None, None, 13, '72129', '72125', 'Enterprise Healthcare Unit', 2, 'Hospital', 'Krankenhaus', None, None]]
idx = {'': 0, 'pk_org': 1, 'pk_org_unit': 2, '_3': 3, '_4': 4, 'pk_org_5': 5, 'xmin_praxis_branch': 6, 'xmin_org_unit': 7, 'praxis': 8, 'pk_category_org': 9, 'ol10n_or': 10, 'l10n_or': 11, '_12': 12, 'l10n_unit_category': 13}
--- frame [<listcomp>]: #258, /usr/share/gnumed/Gnumed/business/gmPraxis.py -------------------
.0 = <list_iterator object at 0x7f69fb637730>
r = [8, 'unit of Enterprise Healthcare Unit', 12, None, None, 13, '72129', '72125', 'Enterprise Healthcare Unit', 2, 'Hospital', 'Krankenhaus', None, None]
idx = {'': 0, 'pk_org': 1, 'pk_org_unit': 2, '_3': 3, '_4': 4, 'pk_org_5': 5, 'xmin_praxis_branch': 6, 'xmin_org_unit': 7, 'praxis': 8, 'pk_category_org': 9, 'ol10n_or': 10, 'l10n_or': 11, '_12': 12, 'l10n_unit_category': 13}
--- frame [__init__]: #337, /usr/share/gnumed/Gnumed/pycommon/gmBusinessDBObject.py -------------------
self = [cPraxisBranch:<uninitialized>]:
  : 8 [<class 'int'>]
  pk_org: unit of Enterprise Healthcare Unit [<class 'str'>]
  pk_org_unit: 12 [<class 'int'>]
  _3: NULL
  _4: NULL
  pk_org_5: 13 [<class 'int'>]
  xmin_praxis_branch: 72129 [<class 'str'>]
  xmin_org_unit: 72125 [<class 'str'>]
  praxis: Enterprise Healthcare Unit [<class 'str'>]
  pk_category_org: 2 [<class 'int'>]
  ol10n_or: Hospital [<class 'str'>]
  l10n_or: Krankenhaus [<class 'str'>]
  _12: NULL
  l10n_unit_category: NULL
aPK_obj = None
row = {'data': [8, 'unit of Enterprise Healthcare Unit', 12, None, None, 13, '72129', '72125', 'Enterprise Healthcare Unit', 2, 'Hospital', 'Krankenhaus', None, None], 'idx': {'': 0, 'pk_org': 1, 'pk_org_unit': 2, '_3': 3, '_4': 4, 'pk_org_5': 5, 'xmin_praxis_branch': 6, 'xmin_org_unit': 7, 'praxis': 8, 'pk_category_org': 9, 'ol10n_or': 10, 'l10n_or': 11, '_12': 12, 'l10n_unit_category': 13}, 'pk_field': 'pk_praxis_branch'}
link_obj = None
--- frame [_init_from_row_data]: #396, /usr/share/gnumed/Gnumed/pycommon/gmBusinessDBObject.py -------------------
self = [cPraxisBranch:<uninitialized>]:
  : 8 [<class 'int'>]
  pk_org: unit of Enterprise Healthcare Unit [<class 'str'>]
  pk_org_unit: 12 [<class 'int'>]
  _3: NULL
  _4: NULL
  pk_org_5: 13 [<class 'int'>]
  xmin_praxis_branch: 72129 [<class 'str'>]
  xmin_org_unit: 72125 [<class 'str'>]
  praxis: Enterprise Healthcare Unit [<class 'str'>]
  pk_category_org: 2 [<class 'int'>]
  ol10n_or: Hospital [<class 'str'>]
  l10n_or: Krankenhaus [<class 'str'>]
  _12: NULL
  l10n_unit_category: NULL
row = {'data': [8, 'unit of Enterprise Healthcare Unit', 12, None, None, 13, '72129', '72125', 'Enterprise Healthcare Unit', 2, 'Hospital', 'Krankenhaus', None, None], 'idx': {'': 0, 'pk_org': 1, 'pk_org_unit': 2, '_3': 3, '_4': 4, 'pk_org_5': 5, 'xmin_praxis_branch': 6, 'xmin_org_unit': 7, 'praxis': 8, 'pk_category_org': 9, 'ol10n_or': 10, 'l10n_or': 11, '_12': 12, 'l10n_unit_category': 13}, 'pk_field': 'pk_praxis_branch'}
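A minimal sketch (with made-up data, not GNUmed's actual classes) of the access pattern that fails in the frames above: the business object looks up its primary key via row['idx'], a mapping of column names to positions. Once interleaved wire data corrupts the column descriptions, the expected name is simply absent from the mapping and the lookup raises KeyError:

```python
# Hypothetical data illustrating the failing lookup; note the empty-string
# key where 'pk_praxis_branch' should be, as in the frame dumps above.
intact_idx = {'pk_praxis_branch': 0, 'praxis': 1}
corrupted_idx = {'': 0, 'praxis': 1}          # column name lost on the wire

row = {
    'data': [8, 'Enterprise Healthcare Unit'],
    'idx': corrupted_idx,
    'pk_field': 'pk_praxis_branch',
}

try:
    pk = row['data'][row['idx'][row['pk_field']]]
except KeyError as exc:
    print('lookup failed:', exc)              # lookup failed: 'pk_praxis_branch'
```

With the intact mapping the same expression would return 8, the primary key; the traceback is thus a downstream symptom of the corrupted RowDescription, not of the lookup code itself.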
The most likely explanation was two threads sharing a
connection in unhealthy ways. Disabling connection
pooling did remove the problem and, eventually, it turned
out that while refactoring the connection pooler the
argument <pooled> was not passed on properly in one case,
leading to a thread getting a pooled connection while it
was supposed to have its very own. Fixing that
allowed us to re-enable connection pooling.
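The class of bug described above can be sketched as follows (hypothetical names, not GNUmed's actual pooler code): a refactored wrapper drops the <pooled> argument instead of forwarding it, so a thread that asked for a dedicated connection silently receives the shared pooled one.

```python
class FakeConnection:
    """Stand-in for a psycopg2 connection."""

_pooled_conn = FakeConnection()                # the one shared, pooled connection

def get_connection(pooled=True):
    if pooled:
        return _pooled_conn                    # shared across callers
    return FakeConnection()                    # dedicated connection

def get_connection_buggy(pooled=True):
    return get_connection()                    # bug: <pooled> not passed on

def get_connection_fixed(pooled=True):
    return get_connection(pooled=pooled)       # fix: forward the argument

# The buggy wrapper hands two "dedicated" callers the very same connection:
assert get_connection_buggy(pooled=False) is get_connection_buggy(pooled=False)
# The fixed wrapper gives each such caller its own:
assert get_connection_fixed(pooled=False) is not get_connection_fixed(pooled=False)
```

Once two threads interleave queries on that one connection, protocol messages can mix on the wire, which matches the "extraneous data" and corrupted-RowDescription symptoms reported above.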
What remains mysterious, however, is that the exact same
faulty use of a pooled connection worked just fine under
psycopg2 2.7 while it elicited on-the-wire corruption
under psycopg2 2.8 ...
For the time being, retain an (inactive) switch to disable
the pooler (gmConnectionPool._DISABLE_CONNECTION_POOL).
Reported by various users, some of whom helped greatly in
testing.
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B