#12312 closed defect (fixed)
Segmentation fault migrating very large table
| Reported by: | Ryan J Ollos | Owned by: | Jun Omae |
|---|---|---|---|
| Priority: | normal | Component: | TracMigratePlugin |
| Severity: | normal | Keywords: | |
| Cc: | | Trac Release: | |
Description
I was attempting to migrate an SQLite database to PostgreSQL last evening (in-place migration). The SQLite database is 4 GB on disk. The migration halts on the bitten_report_item table a little under a minute after it finishes migrating the previous table (bitten_report), but before reporting that any rows have been migrated. bitten_report_item is the 6th or 7th table to be migrated. The only reported error is "Segmentation fault" (I don't have the exact output at the moment, but I'll try to post it later). I guess it halts somewhere around: tracmigrateplugin/0.12/tracmigrate/admin.py@14462:155-163#L123.
I don't have direct access to the database. I've been advising someone on the work remotely, writing the migration scripts and observing execution over video conference. The data in the database is proprietary, so I can't have the entire database transferred to me, and I don't have direct remote access due to the firewall. That's just a lame explanation of why I don't have a better bug report, and the same constraints might limit my ability to reproduce things.
I'm going to try to reproduce the situation by creating a database of similar size with contrived data. I imagine more information will be needed, but I wanted to open the ticket ahead of time to see if you had an idea of what the failure mode might be with a very large table, assuming that is the cause. Should the migration be significantly affected by the amount of memory in the system? The server has 8 GB, but we could try the migration on a workstation with more memory.
Attachments (1)
Change History (14)
comment:1 Changed 10 years ago by
Description: modified (diff)
comment:2 Changed 10 years ago by
Hmm. The plugin disables prefetching from the SQLite database to cope with large databases, at tracmigrateplugin/0.12/tracmigrate/admin.py@14462:51-52#L38, so I wouldn't expect the migration to halt on such a large bitten_report_item table within a minute. Could you please confirm that you're using the latest version of the plugin?
comment:3 Changed 10 years ago by
I'm sure we were using the latest version of the plugin, because the Trac instance was installed on a new server yesterday (Ubuntu 14.04) using a script that fetches the latest sources:
wget http://trac-hacks.org/browser/tracmigrateplugin/0.12?format=zip -O src/tracmigrateplugin.zip
I could be wrong on some of the details, such as how long it paused. I'll run through it again tomorrow evening and record the events more carefully.
comment:4 Changed 10 years ago by
The migration was run again and additional information was gathered. I have a video recording of the session that I can share with you, but I don't want to post it publicly, so I'll email you directly.
System information:
- Python 2.7.6
- SQLite 3.8.2
- PySQLite 2.6.0 (via Python 2.7.6), later upgraded to PySQLite 2.6.3 with the same results that I'll describe next.
The bitten_report_item table has 58 million rows. The bitten_report_item migration starts at 11:04 in the video, and there's a pause of about 2 minutes after the following output:
bitten_report_item table...
and before the result:
bitten_report_item table... ./conf/migrate_database.sh: line 17: 5116 Segmentation fault pve/bin/trac-admin "$ENV" migrate -i "postgres://$DB_USER:$DB_PASS@localhost/$DB_NAME"
Memory usage of the trac-admin process jumps from 5% to 62% at 11:32, climbs to 92% by 11:51, and stays there until the segmentation fault at 13:11. I assume it starts to page to disk. Memory usage was at 2% when migrating smaller tables, and at 10% when migrating the bitten_log table, which has 12.5 million records.
Changed 10 years ago by
Attachment: t12312-huge-sqlite.diff added
comment:5 Changed 10 years ago by
Thanks for the details and the video.
I cannot reproduce it yet. I created 1.2 million rows in the node_change table, but the memory usage was only 1.5% of 2 GB. When the eager cursor in SQLiteConnection is used, the usage was 12.4%.
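For context, here is a minimal sqlite3 sketch of the difference between the two access patterns; the database path and table name are illustrative, and the fetchall() call stands in for what Trac's eager cursor does internally:
```python
import sqlite3

cnx = sqlite3.connect('trac.db')  # illustrative path

# Eager pattern (what the eager cursor amounts to): fetchall()
# materializes every row as Python objects before the copy loop even
# starts, so memory use grows with the size of the table.
rows = cnx.execute('SELECT * FROM bitten_report_item').fetchall()

# Lazy pattern (plain cursor): iterating fetches one row at a time,
# so memory use stays flat no matter how many rows the table has.
for row in cnx.execute('SELECT * FROM bitten_report_item'):
    pass  # insert the row into the destination database here
```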
With the t12312-huge-sqlite.diff patch applied, the plugin fetches and inserts rows in batches of 1,000 using the rowid column in SQLite, roughly as sketched below. Could you please try the patch?
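For illustration, a minimal sketch of that batching strategy (keyset pagination on SQLite's implicit rowid column); copy_in_batches and insert_fn are hypothetical names, not the plugin's actual code:
```python
import sqlite3

BATCH_SIZE = 1000  # rows per round trip, matching the patch

def copy_in_batches(src_cnx, table, insert_fn):
    """Walk `table` in rowid order, BATCH_SIZE rows at a time, so only
    one small batch is ever held in memory regardless of table size."""
    last_rowid = 0
    while True:
        rows = src_cnx.execute(
            'SELECT rowid, * FROM %s WHERE rowid > ? '
            'ORDER BY rowid LIMIT ?' % table,
            (last_rowid, BATCH_SIZE)).fetchall()
        if not rows:
            break
        insert_fn([row[1:] for row in rows])  # drop the leading rowid
        last_rowid = rows[-1][0]  # resume after the last row seen

# Usage sketch:
# copy_in_batches(sqlite3.connect('trac.db'), 'bitten_report_item',
#                 lambda batch: None)  # insert batch into PostgreSQL here
```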
comment:6 Changed 10 years ago by
Thank you for the patch. I should be able to test within 24 hours and will report back.
comment:7 Changed 10 years ago by
I tested the patch and we encountered the same issue. Next, I tried appending ?cursor= to the [trac] database string, and it appears to be working now! Memory usage is low, in the 1-2% range. The table migration is still in progress and is estimated to complete in 4 hours. I'll report back when it's done. It looks like the eager cursor is not disabled for in-place migrations: tracmigrateplugin/0.12/tracmigrate/admin.py@14462:34,38,51-52,57#L32.
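For anyone else hitting this, the workaround amounts to appending an empty cursor parameter to the connection string in trac.ini, which makes Trac use the plain (non-eager) SQLite cursor. The db/trac.db path shown is the default and may differ in your environment:
```ini
[trac]
database = sqlite:db/trac.db?cursor=
```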
comment:8 Changed 10 years ago by
Status: new → accepted
Thanks for investigating! You're right. I hadn't properly tested the in-place migration....
With the following patch, the eager cursor is disabled (and verified to be disabled) whenever the source database is SQLite.
```diff
--- tracmigrate/admin.py
+++ tracmigrate/admin.py
@@ -48,8 +48,6 @@
         dst_env = self._create_env(env_path, dburi)
         src_dburi = self.config.get('trac', 'database')
         src_db = get_connection(self.env)
-        if src_dburi.startswith('sqlite:'):
-            src_db.cnx._eager = False  # avoid uses of eager cursor
         dst_db = get_connection(dst_env)
         self._copy_tables(src_db, dst_db, src_dburi, dburi)
         self._copy_directories(src_db, dst_env)
@@ -65,8 +63,8 @@
                                dir=os.path.dirname(self.env.path))
         try:
             dst_env = self._create_env(env_path, dburi)
-            src_db = self.env.get_read_db()
-            dst_db = dst_env.get_db_cnx()
+            src_db = get_connection(self.env)
+            dst_db = get_connection(dst_env)
             self._copy_tables(src_db, dst_db, src_dburi, dburi, inplace=True)
             del src_db
             del dst_db
@@ -123,7 +121,14 @@
     def _copy_tables(self, src_db, dst_db, src_dburi, dburi, inplace=False):
         self._printout('Copying tables:')
 
+        if src_dburi.startswith('sqlite:'):
+            src_db.cnx._eager = False  # avoid uses of eager cursor
         src_cursor = src_db.cursor()
+        if src_dburi.startswith('sqlite:'):
+            from trac.db.sqlite_backend import PyFormatCursor
+            if type(src_cursor.cursor) is not PyFormatCursor:
+                raise AssertionError('src_cursor.cursor is %r' %
+                                     src_cursor.cursor)
         src_tables = set(self._get_tables(src_dburi, src_cursor))
         cursor = dst_db.cursor()
         tables = set(self._get_tables(dburi, cursor)) & src_tables
```
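In short, the eager-cursor toggle moves into _copy_tables, which both the normal and in-place migration paths call, and the new PyFormatCursor assertion makes any future regression fail loudly instead of silently falling back to the eager (prefetching) cursor.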
comment:9 Changed 10 years ago by
The table migration completed successfully. The attachment:t12312-huge-sqlite.diff patch was used along with ?cursor= on the database string. I can test the patch in comment:8 when another migration is done sometime in the next week or so. For what it's worth, the changes look good and sensible to me. Thanks for the patch!
comment:10 Changed 10 years ago by
Another run of the migration was done with the patch from comment:8, and it succeeded (using database = sqlite:db/trac.db). Memory use was low (~1%) throughout the migration.
comment:13 Changed 9 years ago by
After [14614] I've seen a 6-hour migration reduced to less than 30 minutes!