Solved

Difficulty Importing Large Snapshots



I am working in Acumatica 24R2, Build 24.205.0015. I am trying to bring my local instance up to date with our live instance, so I took a snapshot of my Local, using the “Exclude Wiki and Attachments” setting. However, the snapshot file (1.21 GB) is larger than what Acumatica natively supports, so I followed the instructions in the link below to allow importing larger snapshot files.

https://community.acumatica.com/maintenance-and-troubleshooting-229/how-to-increase-the-size-limit-for-a-snapshot-file-being-uploaded-150?tid=150&fid=229

 

Now I am able to upload the snapshot into my local tenant; however, I am unable to actually restore it to the instance. Every time I try, I run into the following error:

“An error occurred while importing data into the 'GLTran' table.”

The stated cause is “System.IO.IOException: Stream was too long.” The full trace log is posted below.
 

Since the snapshot method did not work, I tried the method recommended in this post:

https://community.acumatica.com/maintenance-and-troubleshooting-229/how-to-import-a-large-snapshot-using-the-acumatica-erp-configuration-wizard-387

 

Unfortunately, even this does not seem to work perfectly, for a couple of reasons. There does not seem to be a resolution for the loss of data in Usr fields, and there is also a complication related to a customization created by my service provider when I use this method.

Do we have a way to reliably use and restore large snapshots? I can’t imagine Acumatica would support the creation of a large snapshot if it were never able to actually perform a restore with it.

 

Here is the trace for the error that occurs when importing the snapshot.

System.IO.IOException: Stream was too long.
   at System.IO.MemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count)
   at System.IO.Stream.InternalCopyTo(Stream destination, Int32 bufferSize)
   at PX.DbServices.Points.ZipArchive.ZipTableAdapter..MoveNext()
   at PX.DbServices.Points.ZipArchive.ZipTableAdapter..MoveNext()
   at PX.BulkInsert.Provider.TransferTableTask.Executor.[1].MoveNext()
   at PX.DbServices.Model.DataSet.PxDataRows..MoveNext()
   at PX.DbServices.Points.MsSql.MsSqlTableAdapter.BulkCopy(IEnumerable`1 rows, Boolean mayLockTable, ExecutionContext context, Action`1 transferObserver)

System.Exception: Stream was too long. Table name: GLTran. File name: . Line number: 0. ---> System.IO.IOException: Stream was too long.
   at System.IO.MemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count)
   at System.IO.Stream.InternalCopyTo(Stream destination, Int32 bufferSize)
   at PX.DbServices.Points.ZipArchive.ZipTableAdapter..MoveNext()
   at PX.DbServices.Points.ZipArchive.ZipTableAdapter..MoveNext()
   at PX.BulkInsert.Provider.TransferTableTask.Executor.[1].MoveNext()
   at PX.DbServices.Model.DataSet.PxDataRows..MoveNext()
   at PX.DbServices.Points.MsSql.MsSqlTableAdapter.BulkCopy(IEnumerable`1 rows, Boolean mayLockTable, ExecutionContext context, Action`1 transferObserver)
   --- End of inner exception stack trace ---
   at PX.DbServices.Points.MsSql.MsSqlTableAdapter.BulkCopy(IEnumerable`1 rows, Boolean mayLockTable, ExecutionContext context, Action`1 transferObserver)
   at PX.DbServices.Points.DbmsBase.DbmsTableAdapter.WriteRows(IEnumerable`1 rows, Boolean exclusiveWrite, Action`1 transferObserver)
   at PX.BulkInsert.Provider.TransferTableTask.Executor.Start(DataTransferObserver observer)
   at PX.BulkInsert.Provider.BatchTransferExecutorSync.RunSingleTask(TransferTableTask task)

PX.Data.PXException: An error occurred while importing data into the 'GLTran' table. ---> System.Exception: Stream was too long. Table name: GLTran. File name: . Line number: 0. ---> System.IO.IOException: Stream was too long.
   at System.IO.MemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count)
   at System.IO.Stream.InternalCopyTo(Stream destination, Int32 bufferSize)
   at PX.DbServices.Points.ZipArchive.ZipTableAdapter..MoveNext()
   at PX.DbServices.Points.ZipArchive.ZipTableAdapter..MoveNext()
   at PX.BulkInsert.Provider.TransferTableTask.Executor.[1].MoveNext()
   at PX.DbServices.Model.DataSet.PxDataRows..MoveNext()
   at PX.DbServices.Points.MsSql.MsSqlTableAdapter.BulkCopy(IEnumerable`1 rows, Boolean mayLockTable, ExecutionContext context, Action`1 transferObserver)
   --- End of inner exception stack trace ---
   at PX.DbServices.Points.MsSql.MsSqlTableAdapter.BulkCopy(IEnumerable`1 rows, Boolean mayLockTable, ExecutionContext context, Action`1 transferObserver)
   at PX.DbServices.Points.DbmsBase.DbmsTableAdapter.WriteRows(IEnumerable`1 rows, Boolean exclusiveWrite, Action`1 transferObserver)
   at PX.BulkInsert.Provider.TransferTableTask.Executor.Start(DataTransferObserver observer)
   at PX.BulkInsert.Provider.BatchTransferExecutorSync.RunSingleTask(TransferTableTask task)
   --- End of inner exception stack trace ---
   at PX.Data.Update.DtObserver.AskHowToRecoverFromError(Exception ex)
   at PX.BulkInsert.Provider.BatchTransferExecutorSync.RunSingleTask(TransferTableTask task)
   at PX.BulkInsert.Provider.BatchTransferExecutorSync.fetchAndRunNextTask()
   at PX.BulkInsert.Provider.BatchTransferExecutorSync.StartSync()
   at PX.Data.Update.DtObserver.AskHowToRecoverFromError(Exception ex)
   at PX.BulkInsert.Provider.BatchTransferExecutorSync.StartSync()
   at PX.Data.Update.PXSnapshotUploader.UploadSnapshot(ZipArchiveWrapper zip, Point point, FileFormat readFormats, FileFormat writeFormats)
   at PX.Data.Update.PXSnapshotUploader.<>c__DisplayClass6_0.<Start>b__0()
   at PX.Data.Update.DatabaseLock.DatabaseOperation(PXDatabaseProvider provider, Action act, Boolean lockDB, Boolean disableFullText)
   at PX.Data.Update.PXSnapshotUploader.Start()
   at PX.Concurrency.CancellationIgnorantExtensions.RunWithCancellationViaThreadAbort(Action method, CancellationToken cancellationToken)
   at PX.Concurrency.CancellationIgnorantExtensions.<>c__DisplayClass1_0.<ToCancellationViaThreadAbort>b__0(CancellationToken cancellationToken)
   at PX.Concurrency.Internal.PXLongOperationPars.PopAndRunDelegate(CancellationToken cancellationToken)
   at PX.Concurrency.Internal.RuntimeLongOperationManager.PerformOperation(PXLongOperationPars p)


13 replies

MichaelShirk
Captain II
  • Captain II
  • 134 replies
  • Answer
  • February 19, 2025

Hi ​@gdewald  , 

Because the snapshot exports as a zip file, the archive will be smaller than the actual, uncompressed size of an individual XML table file inside it (GLTran in your case). The compressed size is what you see on disk, but the size limit is hit by the uncompressed file during restore.

What you need to do is open the snapshot archive, sort its contents by Size descending, and you'll see the files with the largest uncompressed size at the top.
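If you'd rather not open the archive by hand, you can also list the uncompressed sizes programmatically; here is a small sketch using Python's zipfile module (the snapshot path is a placeholder):

```python
import zipfile

def largest_entries(snapshot_path, top=5):
    """Return (filename, uncompressed_size_in_bytes) for the largest
    files inside a snapshot zip, biggest first."""
    with zipfile.ZipFile(snapshot_path) as zf:
        # ZipInfo.file_size is the uncompressed size; compress_size is what you see on disk
        infos = sorted(zf.infolist(), key=lambda i: i.file_size, reverse=True)
        return [(i.filename, i.file_size) for i in infos[:top]]

# Example usage (hypothetical path):
# for name, size in largest_entries("snapshot.zip"):
#     print(f"{name}: {size / (1024 * 1024):.0f} MB")
```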

[Screenshot: the snapshot archive is only 1,022,642 KB]

[Screenshot: the files inside the snapshot are actually much larger]

Modify your web.config file to handle this size, which, again, will be larger than what you see when looking at the compressed snapshot size.

This should fix the issue and allow you to restore the snapshot.
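For reference, a sketch of the two standard ASP.NET/IIS request-size settings that typically need to accommodate the uncompressed table size rather than the zip size (the values here are illustrative; note the different units):

```xml
<configuration>
  <system.web>
    <!-- maxRequestLength is in KB: 4194304 KB = 4 GB -->
    <httpRuntime maxRequestLength="4194304" executionTimeout="3600" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- maxAllowedContentLength is in bytes: 4294967295 is the IIS maximum (~4 GB) -->
        <requestLimits maxAllowedContentLength="4294967295" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
```

These are the settings the linked community post discusses; your existing web.config will already contain these sections, so adjust the values in place rather than pasting this fragment wholesale.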


  • Author
  • Freshman I
  • 11 replies
  • February 19, 2025

@MichaelShirk Oh wow, I didn’t even notice that. Thank you Michael, I will give this a go!


MichaelShirk
Captain II
  • Captain II
  • 134 replies
  • February 19, 2025

@gdewald Also, I’ll be making a post about this, but I’ve discovered a much quicker way to update the data in my local instance. 
In short, I restore a production database snapshot to a new database on my local server, then I use the ERP Configuration wizard to change the database of my dev site and choose that newly created database. It only takes minutes and you don’t have to deal with snapshot size limits.
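For anyone trying this database-switch approach, the restore step on a local SQL Server would look roughly like the following T-SQL (database, backup, and logical file names are hypothetical; use the logical names reported by RESTORE FILELISTONLY for your own backup):

```sql
-- Inspect the logical file names inside the backup first
RESTORE FILELISTONLY FROM DISK = N'C:\Backups\Production.bak';

-- Restore the production backup into a NEW local database
RESTORE DATABASE [AcumaticaDev]
FROM DISK = N'C:\Backups\Production.bak'
WITH MOVE N'Production_Data' TO N'C:\Data\AcumaticaDev.mdf',
     MOVE N'Production_Log'  TO N'C:\Data\AcumaticaDev_log.ldf',
     RECOVERY, STATS = 10;
```

After the restore completes, point the dev site at the new database with the Acumatica ERP Configuration Wizard.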


  • Author
  • Freshman I
  • 11 replies
  • February 19, 2025

@MichaelShirk That’s fantastic! I'm looking forward to seeing it! That would make things so much easier.



Good find Michael!  I would have never thought to look within the file.


MichaelShirk
Captain II
  • Captain II
  • 134 replies
  • February 20, 2025

@travislawson Yeah, I’m not sure why that part is not included in the original post that explains how to modify the web.config file to allow for larger snapshots.


  • Freshman I
  • 7 replies
  • April 14, 2025

We are having this issue as well. I’m not sure if I’m just not modifying the web.config correctly, but it still won’t import. I’d love to try the database switch; that makes a lot more sense to me. How would that work with our production being in the cloud and our QA being on-prem?


  • Freshman I
  • 7 replies
  • April 14, 2025

It helps to modify the correct part of the config.  For reference:  

<httpRuntime maxRequestLength="1048576"/>

Still, my previous question stands.


valentynbeznosiuk
Jr Varsity I

Hi there,

Also, a good way to restore large snapshots locally is described in this post: https://asiablog.acumatica.com/index.php/2017/12/restore-large-snapshot/



  • Freshman I
  • 7 replies
  • April 15, 2025

@MichaelShirk, how do I get a copy of the production database from the hosted site? The db restore is how we do a lot of prod-to-QA work, but the new paradigm has prod hosted instead of on-prem. QA and dev are still on-prem, so that gives us a lot more flexibility locally.


MichaelShirk
Captain II

@doncarter We’ve always hosted on-prem, so I don’t know the answer to your question, or even whether it’s possible.
Perhaps someone else can answer or you can submit a support ticket?


  • Freshman I
  • 7 replies
  • April 15, 2025

@MichaelShirk, I pushed for a fully on-prem setup for this reason, among many others. We are moving to an entirely hosted setup, and I haven’t learned the routines that go with that setup yet. I think I have a support ticket in with our vendor to make this possible, but I haven’t gotten a response yet. Either way, thanks for responding. I’m hoping to be able to use this soon.

