
    12.0p: long-running Batch Uploader uploads - problems / best practices

    Hi Isomorphic,

    we just realized we have a big problem with long-running Batch Uploader uploads.
    On the client I see a queue for the "Commit" button (the "Upload" button works fine and is read-only). On the server this queue is processed in a single transaction.
    The problem is that the INSERTs in this single transaction block the system for other users, since many tables and indexes are involved. As we don't have a separate schema per customer, this even blocks other customers.

    I see a few possible mitigations and fixes here:
    1. Split the schema per customer - something we want to do soon anyway. This would limit the blocking to the current customer.
    2. Commit after every row of the BatchUploader queue. IMHO we should be able to do this (not investigated yet), as we have access to the transaction inside the add DMI via the rpcManager; a rough sketch follows below.
    Do you have any best practices here, especially regarding #2?
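
    A rough, untested sketch of idea #2 inside the add DMI (the class name is a placeholder; whether the queue continues cleanly after a mid-queue commit() is exactly what I'd like to verify):
    Code:
    import com.isomorphic.datasource.DSRequest;
    import com.isomorphic.datasource.DSResponse;
    
    // Hypothetical add DMI: commit the queue's shared transaction after
    // every successfully inserted row instead of once for the whole queue.
    public class BatchUploaderPerRowCommit {
        public DSResponse add(DSRequest dsRequest) throws Exception {
            DSResponse addResponse = dsRequest.execute();
            if (!addResponse.statusIsError()
                    && dsRequest.getRPCManager() != null
                    && dsRequest.getRPCManager().getDsTransaction() != null) {
                // Assumption: a mid-queue commit() does not break the
                // remaining requests in the queue.
                dsRequest.getRPCManager().getDsTransaction().commit();
            }
            return addResponse;
        }
    }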

    Thank you & Best regards
    Blama

    #2
    You can set the transaction policy, as you already seem to be aware, to cause each row to be committed separately.

    As far as the question of how to commit a bunch of rows without blocking other users: that's a general database administration question; there wouldn't be any SmartGWT-specific best practices here.
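
    For example (a generic JDBC sketch, nothing SmartGWT-specific; table and column names are placeholders), committing in chunks keeps each transaction, and therefore its locks, short:
    Code:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.List;
    
    // Commit in small chunks so no single transaction holds its locks
    // for the duration of the whole upload.
    public class ChunkedCommitExample {
        static void insertInChunks(Connection conn, List<String> names, int chunkSize) throws Exception {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO employees (name) VALUES (?)")) {
                int n = 0;
                for (String name : names) {
                    ps.setString(1, name);
                    ps.executeUpdate();
                    if (++n % chunkSize == 0)
                        conn.commit(); // release locks acquired so far
                }
            }
            conn.commit(); // commit the final partial chunk
        }
    }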



      #3
      Hi Isomorphic,

      agreed. transactionPolicy does not work for me, as every BatchUploader row results in multiple INSERTs (base table + n:m table, where I generate the entries by splitting comma-separated strings).
      But this is easily solved with DSTransaction and a manual commit().

      A (minor) problem here is that if a single request fails, the RPCManager issues a rollback() and the queue, i.e. all requests in it, is marked as failed. Of course, upfront validation should ensure that no request actually fails, but you can never be sure.

      Therefore, I have an enhancement suggestion here.
      Example: 100 rows in the BatchUploader:
      • As of today, even if there is an error in row 1, all 99 other rows will be inserted as well, and then one big rollback is done.
        • This isn't the best behavior, BUT please don't change it without an option to retain it - I rely on it now that I run commit() after each row.
      • If you do make a change here, e.g. an option "autoCommitAfterEachRow" or the change suggested above, then only the requests that failed *themselves* should be marked as failed, not *all* requests, as the successful ones will have been committed anyway. These committed rows should then also be removed from the BU grid, while the failed ones stay.
        This is basically like the current "retain" mode, which does not send rows with validation errors and keeps them for fixing and committing later.
      This is minor for me, but it might help prevent long-running transactions in the first place, which is mandatory for many OLTP systems and therefore might help many of your customers.

      Best regards
      Blama



        #4
        You seem to be saying that when automatic transaction handling is not in play, we mark all requests as failed anyway? That would simply be a bug, but we would need to see a test case for that.



          #5
          Hi Isomorphic,

          yes, that is what I'm saying (using v12.0p_2019-06-25). Please see the test case here with these changes:

          EmployeesUpload.java:
          Code:
          package com.smartgwt.sample.server.listener;
          
          import javax.servlet.http.HttpServletRequest;
          import javax.servlet.http.HttpServletResponse;
          
          import com.isomorphic.datasource.DSRequest;
          import com.isomorphic.datasource.DSResponse;
          import com.isomorphic.datasource.DSTransaction;
          
          public class EmployeesUpload {
              public DSResponse add(DSRequest dsRequest, HttpServletRequest servletRequest, HttpServletResponse servletResponse) throws Exception {
                  // Reuse the queue's shared transaction if there is one; otherwise
                  // (e.g. for a standalone server-side request) create our own.
                  DSTransaction dsTransaction = null;
                  if (dsRequest.getRPCManager() == null || dsRequest.getRPCManager().getDsTransaction() == null) {
                      dsTransaction = new DSTransaction();
                      dsTransaction.setUserId(dsRequest.getUserId());
                      dsTransaction.setClientRequest(false);
                  } else {
                      dsTransaction = dsRequest.getRPCManager().getDsTransaction();
                  }
          
                  // Deliberately fail one row ("Jeff Doe2" from the test CSV) to
                  // reproduce the problem: all other requests are marked as failed, too.
                  if ("Jeff Doe2".equals(dsRequest.getValues().get("Name")))
                      return new DSResponse().setFailure("Jeff Doe2 is not allowed here");
          
                  // Execute the INSERT and commit after every successful row instead
                  // of once at the end of the whole queue.
                  DSResponse addResponse = dsRequest.execute();
                  if (!addResponse.statusIsError())
                      dsTransaction.commit();
                  return addResponse;
              }
          }
          TestdataImportEmployees.csv:
          Code:
          Name,ReportsTo,Gender,EmployeeId
          Anna Sun2,Charles Madigen,female,2000
          Mike Sun2,Tamara Kane,male,2001
          Jeff Doe2,Tamara Kane,male,2002
          John Doe2,Tamara Kane,male,2003

          The "-10" should be 0 here, IMHO.
          Best, it would also be this way when not using the manual "COMMIT"-logic I have in EmployeesUpload.java.

          [Screenshot: Add-Queue.PNG - the responses of the add queue, with the other requests showing status -10]

          Best regards
          Blama

