
    Exceptions using BatchUploader on app engine data store

    I'm trying to upload a large batch of items using the BatchUploader class, but I run into an exception as soon as the CSV has more than 5 lines.

    Here is the exception I'm getting.

    Code:
    java.lang.IllegalArgumentException: operating on too many entity groups in a single transaction.
    [INFO] 	at com.google.appengine.api.datastore.DatastoreApiHelper.translateError(DatastoreApiHelper.java:39)
    [INFO] 	at com.google.appengine.api.datastore.DatastoreApiHelper$1.convertException(DatastoreApiHelper.java:76)
    [INFO] 	at com.google.appengine.api.utils.FutureWrapper.get(FutureWrapper.java:94)
    [INFO] 	at com.google.appengine.api.datastore.Batcher$ReorderingMultiFuture.get(Batcher.java:129)
    [INFO] 	at com.google.appengine.api.datastore.FutureHelper$TxnAwareFuture.get(FutureHelper.java:171)
    [INFO] 	at com.google.appengine.api.utils.FutureWrapper.get(FutureWrapper.java:86)
    [INFO] 	at com.google.appengine.api.datastore.FutureHelper.getInternal(FutureHelper.java:71)
    [INFO] 	at com.google.appengine.api.datastore.FutureHelper.quietGet(FutureHelper.java:32)
    [INFO] 	at com.google.appengine.api.datastore.DatastoreServiceImpl.put(DatastoreServiceImpl.java:86)
    [INFO] 	at com.google.appengine.datanucleus.WrappedDatastoreService.put(WrappedDatastoreService.java:112)
    [INFO] 	at com.google.appengine.datanucleus.EntityUtils.putEntitiesIntoDatastore(EntityUtils.java:766)
    [INFO] 	at com.google.appengine.datanucleus.DatastorePersistenceHandler.insertObjectsInternal(DatastorePersistenceHandler.java:314)
    [INFO] 	at com.google.appengine.datanucleus.DatastorePersistenceHandler.insertObject(DatastorePersistenceHandler.java:218)
    [INFO] 	at org.datanucleus.state.JDOStateManager.internalMakePersistent(JDOStateManager.java:2386)
    [INFO] 	at org.datanucleus.state.JDOStateManager.flush(JDOStateManager.java:3783)
    [INFO] 	at org.datanucleus.ObjectManagerImpl.flushInternalWithOrdering(ObjectManagerImpl.java:3888)
    [INFO] 	at org.datanucleus.ObjectManagerImpl.flushInternal(ObjectManagerImpl.java:3811)
    [INFO] 	at org.datanucleus.ObjectManagerImpl.flush(ObjectManagerImpl.java:3751)
    [INFO] 	at org.datanucleus.ObjectManagerImpl.preCommit(ObjectManagerImpl.java:4141)
    [INFO] 	at org.datanucleus.ObjectManagerImpl.transactionPreCommit(ObjectManagerImpl.java:428)
    [INFO] 	at org.datanucleus.TransactionImpl.internalPreCommit(TransactionImpl.java:398)
    [INFO] 	at org.datanucleus.TransactionImpl.commit(TransactionImpl.java:287)
    [INFO] 	at org.datanucleus.api.jpa.JPAEntityTransaction.commit(JPAEntityTransaction.java:103)
    As soon as I save more than 5 entities it throws this exception. At this point I'm confident the issue is coming from App Engine itself: I verified it with a simple servlet that just writes entities using JPA, so the problem is independent of SmartGWT. SmartGWT just encounters the issue when it calls into JPA.

    That seems to correspond with the documentation: Transactions and entity groups

    What is a solid workaround for this? I'm hoping to stick with the App Engine datastore and would prefer to keep transactions if possible.
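    One common workaround (a sketch, not a SmartGWT or App Engine API) is to split the batch into chunks and commit each chunk in its own transaction, so no single transaction spans more entity groups than the datastore allows. The chunk size of 5 and the `Product` entity are assumptions based on the behaviour described above:

    Code:
    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class ChunkedWriter {

        /** Splits a list into consecutive sublists of at most chunkSize elements. */
        public static <T> List<List<T>> chunk(List<T> items, int chunkSize) {
            List<List<T>> chunks = new ArrayList<>();
            for (int i = 0; i < items.size(); i += chunkSize) {
                chunks.add(new ArrayList<>(
                    items.subList(i, Math.min(i + chunkSize, items.size()))));
            }
            return chunks;
        }

        // Sketch of how the chunks would be committed, assuming a plain JPA
        // EntityManager ("em") and the Product entity from the DataSource above.
        // Each chunk gets its own transaction, so no single transaction touches
        // more entity groups than the datastore permits:
        //
        //   for (List<Product> batch : chunk(products, 5)) {
        //       em.getTransaction().begin();
        //       for (Product p : batch) {
        //           em.persist(p);
        //       }
        //       em.getTransaction().commit();
        //   }
    }
    ```

    The trade-off is that the upload as a whole is no longer atomic: a failure in a later chunk leaves earlier chunks committed.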

    DataSource:
    Code:
    <DataSource
      ID="product_DataSource"
      serverConstructor="com.isomorphic.jpa.GAEJPADataSource"
      beanClassName="com.example.model.Product"
    >
      <fields>
        <field name="productId"    type="text" hidden="true"   primaryKey="true" />
        <field name="productNumber"  type="text" title="Product Number" required="true"   />
        <field name="upc" type="text" title="UPC" canEdit="false"   />
        <field name="description" type="text" title="Description" canEdit="false"   />
      </fields>
    </DataSource>
    persistence.xml
    Code:
    <?xml version="1.0" encoding="UTF-8"?>
    <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
      <persistence-unit name="ds" transaction-type="RESOURCE_LOCAL">
        <provider>org.datanucleus.api.jpa.PersistenceProviderImpl</provider>
        <class>com.example.model.Product</class>  
        <exclude-unlisted-classes>true</exclude-unlisted-classes>  
        <properties>
          <property name="datanucleus.ConnectionURL" value="appengine"/>
          <property name="datanucleus.NontransactionalRead" value="true"/>
          <property name="datanucleus.NontransactionalWrite" value="false"/>
          <property name="datanucleus.singletonEMFForName" value="true"/>
          <property name="datanucleus.appengine.datastoreEnableXGTransactions" value="true" />
        </properties>
      </persistence-unit>
    </persistence>

    #2
    You mention that you'd like to stick with the App Engine datastore, but this seems to be very straightforward evidence that it just doesn't meet your requirements.

    What we generally recommend with GAE is to use CloudSQL (setup instructions are here).



      #3
      In general I wanted to use the datastore, but I don't have a hard requirement for it. I can see the benefits of *not* using it with SmartGWT. I just wanted to double-check that there wasn't a quick workaround. Thanks for the quick reply.



        #4
        Hi andyadams,

        I asked the same question here.

        @Isomorphic: That specific question is unanswered in the linked thread.

        @andyadams: I'll start developing my uploader use case Monday and let you know here or in the other thread which road I went and how it worked out.

        Best regards,
        Blama



          #5
          The BatchUploader keeps uncommitted records in browser memory and does not place them in any kind of server-side storage (permanent or temporary) until the user has reviewed the data and pressed the "Commit" button, which then saves the records to the target DataSource.

          We're not following what question remains unanswered.



            #6
            Hi Isomorphic,

            the functionality I (and most likely andyadams as well) am looking for is:
            1. Upload a CSV with 10 rows
            2. 5 OK, 5 with errors
            3. Hit commit
            4. 5 rows are transferred
            5. Remove errors from 2 of the 5 remaining rows
            6. Hit commit
            7. 2 rows are transferred
            8. Remove errors from the rest of the rows
            9. Hit commit
            10. Remaining 3 rows are transferred
            11. List is empty
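            The steps above boil down to: try to save every pending row, keep only the rows that still fail, and repeat until the list is empty. A framework-agnostic sketch of that loop (the `RowSaver` interface is a hypothetical stand-in for whatever actually persists one row, e.g. a DataSource save):

            Code:
            ```java
            import java.util.ArrayList;
            import java.util.List;

            public class PartialCommit {

                /** Hypothetical stand-in for whatever persists one uploaded row. */
                public interface RowSaver<T> {
                    boolean trySave(T row);  // true = saved, false = still has errors
                }

                /**
                 * Attempts to save every pending row and returns the rows that failed,
                 * i.e. the rows the upload grid should keep showing for correction.
                 */
                public static <T> List<T> commit(List<T> pending, RowSaver<T> saver) {
                    List<T> stillFailing = new ArrayList<>();
                    for (T row : pending) {
                        if (!saver.trySave(row)) {
                            stillFailing.add(row);
                        }
                    }
                    return stillFailing;
                }
            }
            ```

            Each press of "Commit" would call commit() with the grid's current rows and replace them with the returned list; once the returned list is empty, the upload is done.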


            Best regards,
            Blama



              #7
              Ah, thanks for clarifying.

              There isn't a built-in mode for this, but as we mentioned in the other thread, much of the non-trivial code involved in the BatchUploader is now available via the DataImport API, so you could build this kind of UI yourself.

              Alternatively, you can use the Feature Sponsorship program to have this added as a built-in mode. This would be a relatively small sponsorship, as some of the underpinnings are already there (e.g. batchUploader.partialCommit).
