Solved

Difficulty Importing Large Snapshots

  • February 19, 2025
  • 6 replies
  • 100 views

I am working in Acumatica 24R2 Build 24.205.0015. I am trying to bring my local instance up to date with our live instance. I took a snapshot using the “Exclude Wiki and Attachments” setting; however, the snapshot file (1.21 GB) was larger than what Acumatica natively supports. As a result, I followed the instructions in this link to allow importing larger snapshot files:

https://community.acumatica.com/maintenance-and-troubleshooting-229/how-to-increase-the-size-limit-for-a-snapshot-file-being-uploaded-150?tid=150&fid=229
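For context, the change that article describes comes down to raising the standard ASP.NET/IIS upload limits in the site's web.config. A rough sketch of the relevant settings is below — the values are illustrative only (note the different units: `maxRequestLength` is in kilobytes, `maxAllowedContentLength` in bytes), and the linked article should be treated as the authoritative source for the exact edits:

```xml
<!-- web.config fragment: illustrative values only; both limits must exceed the snapshot size -->
<configuration>
  <system.web>
    <!-- maxRequestLength is in kilobytes (~2 GB here) -->
    <httpRuntime maxRequestLength="2097151" executionTimeout="3600" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- maxAllowedContentLength is in bytes (~2 GB here) -->
        <requestLimits maxAllowedContentLength="2147483647" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
```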

 

Now I am able to upload the snapshot into my local tenant; however, I am unable to actually restore it to the instance. Every time I try, I run into the following error:

“An error occurred while importing data into the 'GLTran' table.”

The underlying error is “System.IO.IOException: Stream was too long.” I have posted the full trace log below.

Since the snapshot method did not work, I tried the method recommended in this post:

https://community.acumatica.com/maintenance-and-troubleshooting-229/how-to-import-a-large-snapshot-using-the-acumatica-erp-configuration-wizard-387

 

Unfortunately, even this does not seem to work perfectly, for a couple of reasons. There does not seem to be a resolution for the loss of data in Usr fields, and there is also a complication related to a customization created by my service provider when I use this method.

Do we have a way to reliably use and restore large snapshots? I can’t imagine Acumatica would support the creation of a large snapshot if it could never actually perform a restore with it.

 

Here is the trace for the error that occurs when importing the snapshot.

System.IO.IOException: Stream was too long.
   at System.IO.MemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count)
   at System.IO.Stream.InternalCopyTo(Stream destination, Int32 bufferSize)
   at PX.DbServices.Points.ZipArchive.ZipTableAdapter..MoveNext()
   at PX.DbServices.Points.ZipArchive.ZipTableAdapter..MoveNext()
   at PX.BulkInsert.Provider.TransferTableTask.Executor.[1].MoveNext()
   at PX.DbServices.Model.DataSet.PxDataRows..MoveNext()
   at PX.DbServices.Points.MsSql.MsSqlTableAdapter.BulkCopy(IEnumerable`1 rows, Boolean mayLockTable, ExecutionContext context, Action`1 transferObserver)

System.Exception: Stream was too long. Table name: GLTran. File name: . Line number: 0. ---> System.IO.IOException: Stream was too long.
   at System.IO.MemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count)
   at System.IO.Stream.InternalCopyTo(Stream destination, Int32 bufferSize)
   at PX.DbServices.Points.ZipArchive.ZipTableAdapter..MoveNext()
   at PX.DbServices.Points.ZipArchive.ZipTableAdapter..MoveNext()
   at PX.BulkInsert.Provider.TransferTableTask.Executor.[1].MoveNext()
   at PX.DbServices.Model.DataSet.PxDataRows..MoveNext()
   at PX.DbServices.Points.MsSql.MsSqlTableAdapter.BulkCopy(IEnumerable`1 rows, Boolean mayLockTable, ExecutionContext context, Action`1 transferObserver)
   --- End of inner exception stack trace ---
   at PX.DbServices.Points.MsSql.MsSqlTableAdapter.BulkCopy(IEnumerable`1 rows, Boolean mayLockTable, ExecutionContext context, Action`1 transferObserver)
   at PX.DbServices.Points.DbmsBase.DbmsTableAdapter.WriteRows(IEnumerable`1 rows, Boolean exclusiveWrite, Action`1 transferObserver)
   at PX.BulkInsert.Provider.TransferTableTask.Executor.Start(DataTransferObserver observer)
   at PX.BulkInsert.Provider.BatchTransferExecutorSync.RunSingleTask(TransferTableTask task)

PX.Data.PXException: An error occurred while importing data into the 'GLTran' table. ---> System.Exception: Stream was too long. Table name: GLTran. File name: . Line number: 0. ---> System.IO.IOException: Stream was too long.
   at System.IO.MemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count)
   at System.IO.Stream.InternalCopyTo(Stream destination, Int32 bufferSize)
   at PX.DbServices.Points.ZipArchive.ZipTableAdapter..MoveNext()
   at PX.DbServices.Points.ZipArchive.ZipTableAdapter..MoveNext()
   at PX.BulkInsert.Provider.TransferTableTask.Executor.[1].MoveNext()
   at PX.DbServices.Model.DataSet.PxDataRows..MoveNext()
   at PX.DbServices.Points.MsSql.MsSqlTableAdapter.BulkCopy(IEnumerable`1 rows, Boolean mayLockTable, ExecutionContext context, Action`1 transferObserver)
   --- End of inner exception stack trace ---
   at PX.DbServices.Points.MsSql.MsSqlTableAdapter.BulkCopy(IEnumerable`1 rows, Boolean mayLockTable, ExecutionContext context, Action`1 transferObserver)
   at PX.DbServices.Points.DbmsBase.DbmsTableAdapter.WriteRows(IEnumerable`1 rows, Boolean exclusiveWrite, Action`1 transferObserver)
   at PX.BulkInsert.Provider.TransferTableTask.Executor.Start(DataTransferObserver observer)
   at PX.BulkInsert.Provider.BatchTransferExecutorSync.RunSingleTask(TransferTableTask task)
   --- End of inner exception stack trace ---
   at PX.Data.Update.DtObserver.AskHowToRecoverFromError(Exception ex)
   at PX.BulkInsert.Provider.BatchTransferExecutorSync.RunSingleTask(TransferTableTask task)
   at PX.BulkInsert.Provider.BatchTransferExecutorSync.fetchAndRunNextTask()
   at PX.BulkInsert.Provider.BatchTransferExecutorSync.StartSync()
   at PX.Data.Update.DtObserver.AskHowToRecoverFromError(Exception ex)
   at PX.BulkInsert.Provider.BatchTransferExecutorSync.StartSync()
   at PX.Data.Update.PXSnapshotUploader.UploadSnapshot(ZipArchiveWrapper zip, Point point, FileFormat readFormats, FileFormat writeFormats)
   at PX.Data.Update.PXSnapshotUploader.<>c__DisplayClass6_0.<Start>b__0()
   at PX.Data.Update.DatabaseLock.DatabaseOperation(PXDatabaseProvider provider, Action act, Boolean lockDB, Boolean disableFullText)
   at PX.Data.Update.PXSnapshotUploader.Start()
   at PX.Concurrency.CancellationIgnorantExtensions.RunWithCancellationViaThreadAbort(Action method, CancellationToken cancellationToken)
   at PX.Concurrency.CancellationIgnorantExtensions.<>c__DisplayClass1_0.<ToCancellationViaThreadAbort>b__0(CancellationToken cancellationToken)
   at PX.Concurrency.Internal.PXLongOperationPars.PopAndRunDelegate(CancellationToken cancellationToken)
   at PX.Concurrency.Internal.RuntimeLongOperationManager.PerformOperation(PXLongOperationPars p)


Best answer by MichaelShirk

Hi ​@gdewald  , 

Because snapshots export as a zip file, the file on disk is smaller than the actual size of an individual XML table file (GLTran in your case) inside it — the XML compresses very well.

What you need to do is open the snapshot file, sort its contents by Size descending, and you'll see the files with the largest actual size at the top.

[Screenshot: the snapshot itself is only 1,022,642 KB]

[Screenshot: files inside the snapshot are actually much larger]

Modify your web.config file to handle this size, which, again, will be larger than what you see when looking at the compressed snapshot size.

This should fix the issue and allow you to restore the snapshot.
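If you'd rather not open the archive by hand, the uncompressed sizes can be read straight from the zip's central directory. A minimal sketch (the snapshot filename is a placeholder for your own exported file):

```python
import zipfile

def largest_entries(zip_path, top=10):
    """Return (uncompressed_size_bytes, name) for the largest files in a zip,
    largest first, without extracting anything."""
    with zipfile.ZipFile(zip_path) as z:
        entries = [(info.file_size, info.filename) for info in z.infolist()]
    return sorted(entries, reverse=True)[:top]

if __name__ == "__main__":
    # "snapshot.zip" is a placeholder for your snapshot file.
    for size, name in largest_entries("snapshot.zip"):
        print(f"{size / (1024 * 1024):10.1f} MB  {name}")
```

Whatever the largest entry reports (e.g. GLTran's XML) is the size your web.config limits need to accommodate, not the compressed size of the snapshot itself.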


6 replies


  • Author
  • Freshman I
  • 9 replies
  • February 19, 2025

@MichaelShirk Oh wow, I didn’t even notice that. Thank you Michael, I will give this a go!


MichaelShirk
Captain II
  • Captain II
  • 132 replies
  • February 19, 2025

@gdewald Also, I’ll be making a post about this, but I’ve discovered a much quicker way to update the data in my local instance. 
In short, I restore a production database snapshot to a new database on my local server, then I use the ERP Configuration wizard to change the database of my dev site and choose that newly created database. It only takes minutes and you don’t have to deal with snapshot size limits.


  • Author
  • Freshman I
  • 9 replies
  • February 19, 2025

@MichaelShirk That’s fantastic! I'm looking forward to seeing it! That would make things so much easier.



Good find Michael!  I would have never thought to look within the file.


MichaelShirk
Captain II
  • Captain II
  • 132 replies
  • February 20, 2025

@travislawson Yeah, I’m not sure why that part isn’t included in the original post explaining how to modify the web.config file to allow for larger snapshots.

