Opened 12 years ago
Closed 10 years ago
#10233 closed defect (fixed)
UnicodeDecode error when saving hours as csv (fix included)
Reported by: | Dolf Andringa | Owned by: | Ryan J Ollos
---|---|---|---
Priority: | high | Component: | TracHoursPlugin
Severity: | critical | Keywords: | unicode csv export
Cc: | | Trac Release: | 0.12
Description
When downloading a CSV file from the /hours URL, I sometimes get:
UnicodeEncodeError: 'ascii' codec can't encode character ... in position 21: ordinal not in range(128)
This error occurs when a user has entered unicode characters in tickets or hour logs; it is raised while the CSV file is written in queryhours2csv.
The included diff fixes the error by properly encoding all strings as utf-8.
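For context, here is a minimal Python 2 sketch of the failure and the fix. The header labels are invented, and this is not the plugin's actual code path:

```python
# -*- coding: utf-8 -*-
# Minimal Python 2 sketch (invented labels, not the plugin's code). The
# stdlib csv module only handles byte strings: a unicode cell containing
# non-ASCII characters triggers an implicit ASCII conversion inside
# writerow(), which raises UnicodeEncodeError.
import csv
import io

buf = io.BytesIO()
writer = csv.writer(buf)
headers = [u'Описание', u'Часы']       # hypothetical localized labels

try:
    writer.writerow(headers)           # implicit ASCII encode -> fails
except UnicodeEncodeError as e:
    print e                            # 'ascii' codec can't encode ...

# Encoding each cell to UTF-8 bytes first, as the attached diff does,
# avoids the implicit conversion:
writer.writerow([h.encode('utf-8') for h in headers])
print buf.getvalue()
```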
Attachments (1)
Change History (13)
Changed 12 years ago by
comment:1 Changed 12 years ago by
Severity: | normal → critical
---|---
comment:2 Changed 12 years ago by
Priority: | normal → high
---|---
comment:3 Changed 12 years ago by
comment:4 Changed 12 years ago by
Status: | new → assigned
---|---
I'm having trouble reproducing the issue. Could you provide a screen capture from just before the export, as well as the traceback you get when exporting?
comment:5 Changed 12 years ago by
Here is the traceback that results:
How to Reproduce
While doing a GET operation on /hours, Trac issued an internal error.
Request parameters:
{'format': u'csv'}
User agent: Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.22 (KHTML, like Gecko) Ubuntu Chromium/25.0.1364.160 Chrome/25.0.1364.160 Safari/537.22
System Information
Trac | 1.0.1
Babel | 0.9.6
Genshi | 0.6 (without speedups)
pysqlite | 2.6.0
Python | 2.7.3 (default, Aug 1 2012, 05:16:07) [GCC 4.6.3]
setuptools | 0.6
SQLite | 3.7.9
jQuery | 1.7.2
Enabled Plugins
ComponentDependencyPlugin | 0.1
PrivateReports | 0.4dev
TicketSidebarProvider | 0.0-r11698
TracHoursPlugin | 0.6.0dev
Python Traceback
    Traceback (most recent call last):
      File "/home/user/Workspace/th11047/trac-1.0.1/trac/web/main.py", line 497, in _dispatch_request
        dispatcher.dispatch(req)
      File "/home/user/Workspace/th11047/trac-1.0.1/trac/web/main.py", line 214, in dispatch
        resp = chosen_handler.process_request(req)
      File "/home/user/Workspace/trachacks2.git/trachoursplugin/trunk/trachours/hours.py", line 240, in process_request
        return self.process_timeline(req)
      File "/home/user/Workspace/trachacks2.git/trachoursplugin/trunk/trachours/hours.py", line 522, in process_timeline
        return self.display_html(req, query)
      File "/home/user/Workspace/trachacks2.git/trachoursplugin/trunk/trachours/hours.py", line 826, in display_html
        self.queryhours2csv(req, data)
      File "/home/user/Workspace/trachacks2.git/trachoursplugin/trunk/trachours/hours.py", line 1032, in queryhours2csv
        for header in data['headers']])
    UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-9: ordinal not in range(128)
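The bottom frame is the header-row write in queryhours2csv, where unicode labels reach the byte-oriented stdlib writer. As general background (not the plugin's actual fix), the Python 2 csv documentation suggests a small wrapper along these lines for writing unicode rows; the sketch below is lightly adapted from that example:

```python
import csv
import codecs
import cStringIO

class UnicodeWriter(object):
    """Write rows of unicode cells to a byte stream as UTF-8 CSV.

    Lightly adapted from the UnicodeWriter example in the Python 2
    csv documentation; shown as a general-purpose pattern only.
    """
    def __init__(self, f, dialect=csv.excel, encoding='utf-8', **kwds):
        self.queue = cStringIO.StringIO()
        self.writer = csv.writer(self.queue, dialect=dialect, **kwds)
        self.stream = f
        self.encoder = codecs.getincrementalencoder(encoding)()

    def writerow(self, row):
        # Feed the stdlib writer UTF-8 bytes only, then re-encode the
        # joined line for the target stream.
        self.writer.writerow([unicode(s).encode('utf-8') for s in row])
        data = self.queue.getvalue().decode('utf-8')
        self.stream.write(self.encoder.encode(data))
        self.queue.truncate(0)

    def writerows(self, rows):
        for row in rows:
            self.writerow(row)
```

Encoding at each call site, as the patch in comment:10 below does, achieves the same result with less machinery; the wrapper is mainly useful when many writerow calls are involved.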
comment:7 Changed 12 years ago by
Resolution: | → fixed
---|---
Status: | assigned → closed
comment:8 Changed 12 years ago by
Replying to dolfandringa:
This error occurs when a user has entered unicode characters in tickets or hour logs; it is raised while the CSV file is written in queryhours2csv.
I think it can only happen if the ticket user, summary or similar field contains a unicode character. I can't see that the work comments ever get exported as CSV, regardless of how the query is set up. Please let me know if I'm overlooking something, though, or if you can still reproduce the issue after 0.6.0dev-r13210.
comment:9 Changed 12 years ago by
Description: | modified (diff)
---|---
Keywords: | unicode csv export added
comment:10 Changed 10 years ago by
Resolution: | fixed
---|---
Status: | closed → reopened
The problem still exists if the headers contain non-ASCII UTF-8 characters (e.g. if Trac is localized).
I'm not good at Python (this is my first Python code), but the following change fixed the problem in the current trunk.
trachours/hours.py:

         for groupname, results in data['groups']:
             if groupname:
                 writer.writerow(unicode(groupname))
    -        writer.writerow([unicode(header['label'])
    -                         for header in data['headers']])
    +        #writer.writerow([unicode(header['label'])
    +        #                 for header in data['headers']])
    +        rowhead = []
    +        for header in data['headers']:
    +            rowhead.append(unicode(header['label']).encode('utf-8'))
    +        writer.writerow(rowhead)
             for result in results:
                 row = []
                 for header in data['headers']:
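For reference, the same change can be written more compactly as a list comprehension. This is an untested sketch with stand-in header data (the Cyrillic labels are invented), not the committed revision:

```python
# -*- coding: utf-8 -*-
# Untested sketch: the patch above expressed as a single list comprehension,
# with hypothetical stand-in data so it runs on its own under Python 2.
import csv
import io

data = {'headers': [{'label': u'Билет'}, {'label': u'Часы'}]}  # invented labels
writer = csv.writer(io.BytesIO())

# Encode each header label to UTF-8 bytes before the csv writer sees it.
writer.writerow([unicode(header['label']).encode('utf-8')
                 for header in data['headers']])
```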
comment:11 Changed 10 years ago by
Status: | reopened → accepted
---|---
Duplicate of #8520 I think, but thanks for the patch. I will be able to test and apply it later this week.
#8520 closed as a duplicate.