
    SmartGWT blocking my DB

    I have a big problem with DMI methods and I hope you can help me with it. I have had this problem for a long time, but only now am I able to reproduce it more reliably.

    I sometimes get blocked resources in my MSSQL database when executing statements inside a DMI method.
    This does NOT always happen, so sometimes I have to execute the same method multiple times before the DB blocks.

    Basically, I am doing the following steps in my DMI Method:

    1) delete all of the current user's records from a table
    2) insert some records into the table
    3) read those records and do something with them

    This is my (simplified) DMI method:
    Code:
    RPCManager rpcManager = dsRequest.getRPCManager();
    Map<String, Object> values = dsRequest.getValues();
    List<Map<String, Object>> toUpdate =
            (List<Map<String, Object>>) values.get("toUpdate");

    Long userId = (Long) dsRequest
            .getHttpServletRequest().getSession().getAttribute("userId");

    // 1) delete all records of the current user from the table
    DSRequest deleteReq = new DSRequest(
            "tmpAddressAenderungen",
            DataSource.OP_REMOVE, rpcManager);
    deleteReq.setAllowMultiUpdate(true);
    deleteReq.setCriteria("f_user_id", userId);
    deleteReq.execute();

    // 2) insert the new records
    for (Map<String, Object> m : toUpdate) {
        DSRequest insertReq = new DSRequest(
                "tmpAddressAenderungen",
                DataSource.OP_ADD, rpcManager);
        Map<String, Object> valuesToInsert = new HashMap<>();
        valuesToInsert.put("f_type", Type.UPDATE.name());
        valuesToInsert.put("f_field_f_id", m.get("f_id"));
        valuesToInsert.put("f_field_f_strasse", m.get("f_strasse"));
        valuesToInsert.put("f_field_f_plz", m.get("f_plz"));
        valuesToInsert.put("f_field_f_ort", m.get("f_ort"));
        valuesToInsert.put("f_field_f_ortsteil", m.get("f_ortsteil"));
        valuesToInsert.put("f_field_f_land", m.get("f_land"));
        insertReq.setValues(valuesToInsert);
        insertReq.execute();
    }

    // 3) read those records back via the custom fetch operation
    DSRequest editAddressesReq = new DSRequest(
            "tmpAddressAenderungen",
            DataSource.OP_FETCH, rpcManager);
    editAddressesReq.setOperationId("doUpdateAddresses");
    editAddressesReq.execute();

    return new DSResponse();
    So you can see that I first delete all records for the current user, then I insert some records, and then I call "doUpdateAddresses". All of this happens on the same table: "tmpAddressAenderungen".
    This is the table that gets blocked!

    In my "doUpdateAddresses", for testing, I only read all records one by one:
    Code:
    declare @c_f_field_f_id as integer, @c_f_field_f_strasse as varchar(250), @c_f_field_f_zusatz as varchar(250), @c_f_field_f_plz as varchar(250),
            @c_f_field_f_ort as varchar(250), @c_f_field_f_ortsteil as varchar(250), @c_f_field_f_land as varchar(2);

    declare c_cursor cursor for
        select f_field_f_id, f_field_f_strasse, f_field_f_zusatz, f_field_f_plz, f_field_f_ort, f_field_f_ortsteil, f_field_f_land
        from t_tmp_address_aenderungen
        where f_type = 'UPDATE' and
              f_mandant = @mandantId and f_user_id = @userId;

    open c_cursor;

    FETCH NEXT FROM c_cursor
    INTO @c_f_field_f_id, @c_f_field_f_strasse, @c_f_field_f_zusatz, @c_f_field_f_plz, @c_f_field_f_ort, @c_f_field_f_ortsteil, @c_f_field_f_land;

    WHILE @@FETCH_STATUS = 0
    begin

        update t_schueler set f_name = '123' where f_schueler_id = 1;

        FETCH NEXT FROM c_cursor
        INTO @c_f_field_f_id, @c_f_field_f_strasse, @c_f_field_f_zusatz, @c_f_field_f_plz, @c_f_field_f_ort, @c_f_field_f_ortsteil, @c_f_field_f_land;

    end

    close c_cursor;
    deallocate c_cursor;
    So I am basically only reading these records one by one, using a cursor. In my testing application I am not doing anything else with those records.

    As I said, the table is getting blocked. Here is the blocking report:
    Code:
    <blocked-process-report monitorLoop="86592">
     <blocked-process>
      <process id="process3e8692ca8" taskpriority="0" logused="0" waitresource="KEY: 5:72057594064732160 (becef0eb17cd)" waittime="7212" ownerId="945557" transactionname="FETCH CURSOR" lasttranstarted="2015-03-06T20:44:56.683" XDES="0x3d6b6ba20" lockMode="S" schedulerid="1" kpid="884" status="suspended" spid="53" sbid="0" ecid="0" priority="0" trancount="0" lastbatchstarted="2015-03-06T20:44:56.680" lastbatchcompleted="2015-03-06T20:44:51.693" lastattention="1900-01-01T00:00:00.693" clientapp="Microsoft JDBC Driver for SQL Server" hostname="vmd6271" hostpid="0" loginname="test" isolationlevel="read committed (2)" xactid="945557" currentdb="5" lockTimeout="4294967295" clientoption1="673185824" clientoption2="128056">
       <executionStack>
        <frame line="21" stmtstart="1800" stmtend="2152" sqlhandle="0x02000000e76ad902a5ee95e8bad6c48297e71a34087072a80000000000000000000000000000000000000000"/>
       </executionStack>
       <inputbuf>
    set nocount on;
    				
    				declare @mandantId as integer;
    				declare @userId as integer;
    				set @mandantId = 1;
    				set @userId = 47;
    				
    				delete from t_tmp_map_local_remote_address where f_user_id = @userId and f_mandant = @mandantId;
    				
    				----------------------------------------------------------------------
    				--updates
    				declare @c_f_field_f_id as integer, @c_f_field_f_strasse as varchar(250), @c_f_field_f_zusatz as varchar(250), @c_f_field_f_plz as varchar(250), 
    				@c_f_field_f_ort as varchar(250), @c_f_field_f_ortsteil as varchar(250), @c_f_field_f_land as varchar(2);
    
    				declare c_cursor cursor for 
    					select f_field_f_id, f_field_f_strasse, f_field_f_zusatz, f_field_f_plz, f_field_f_ort, f_field_f_ortsteil, f_field_f_land
    					from t_tmp_address_aenderungen
    					where f_type = &apos;UPDATE&apos; and
    					f_mandant = @mandantId and f_user_id = @userId;
    				open c_cursor
    				FETCH NEXT FROM c_cursor
    				INTO @c_f_field_f_id, @c_f_field_f_strasse, @c_f_field_f_zusatz, @c_f_field_f_plz, @c_f_field_   </inputbuf>
      </process>
     </blocked-process>
     <blocking-process>
      <process status="sleeping" spid="54" sbid="0" ecid="0" priority="0" trancount="1" lastbatchstarted="2015-03-06T20:44:56.677" lastbatchcompleted="2015-03-06T20:44:56.677" lastattention="1900-01-01T00:00:00.677" clientapp="Microsoft JDBC Driver for SQL Server" hostname="vmd6271" hostpid="0" loginname="test" isolationlevel="read committed (2)" xactid="945552" currentdb="5" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
       <executionStack/>
       <inputbuf>
    SELECT t_tmp_address_aenderungen.f_id, t_tmp_address_aenderungen.f_mandant, t_tmp_address_aenderungen.f_type, t_tmp_address_aenderungen.f_field_f_id, t_tmp_address_aenderungen.f_field_f_strasse, t_tmp_address_aenderungen.f_field_f_zusatz, t_tmp_address_aenderungen.f_field_f_plz, t_tmp_address_aenderungen.f_field_f_ort, t_tmp_address_aenderungen.f_field_f_ortsteil, t_tmp_address_aenderungen.f_field_f_land, t_tmp_address_aenderungen.f_field_local_id, t_tmp_address_aenderungen.f_field_f_owner_id, t_tmp_address_aenderungen.f_field_f_adresse_id, t_tmp_address_aenderungen.f_field_f_owner, t_tmp_address_aenderungen.f_field_f_schueler_id, t_tmp_address_aenderungen.f_field_f_datum, t_tmp_address_aenderungen.f_field_f_old_address, t_tmp_address_aenderungen.f_field_f_new_address, t_tmp_address_aenderungen.f_field_f_typ, t_tmp_address_aenderungen.f_user_id FROM t_tmp_address_aenderungen WHERE (((t_tmp_address_aenderungen.f_id = 1004 AND t_tmp_address_aenderungen.f_id IS NOT NULL)) AND (t_tmp_address_aenderungen.f_mandan   </inputbuf>
      </process>
     </blocking-process>
    </blocked-process-report>
    So the blocked process is the one running my cursor, and the blocking process is a SELECT statement. But this blocking SELECT is created automatically by SmartGWT (after executing the INSERT statement!).
    Of course, I could disable this fetch using canSyncCache=false, but since I have observed this blocking in several situations, I would like to fix the underlying problem.
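
    Just for completeness, this is roughly how I would turn off that automatic cache-sync fetch in the .ds.xml. This is only a sketch: the field definitions are omitted, and I am assuming the add operation is the one that triggers the fetch:
    Code:
    <DataSource ID="tmpAddressAenderungen" serverType="sql"
                tableName="t_tmp_address_aenderungen">
        <fields>
            <!-- field definitions omitted in this sketch -->
        </fields>
        <operationBindings>
            <!-- suppress the automatic cache-sync fetch after adds -->
            <operationBinding operationType="add" canSyncCache="false"/>
        </operationBindings>
    </DataSource>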

    When digging into the blocked process, I see that the blocked statement is:
    Code:
    FETCH NEXT FROM c_cursor
    INTO @c_f_field_f_id, @c_f_field_f_strasse, @c_f_field_f_zusatz, @c_f_field_f_plz, @c_f_field_f_ort, @c_f_field_f_ortsteil, @c_f_field_f_land;
    and, as I said, the blocking process is the fetch automatically created by SmartGWT after the insert.

    So is there a bug here? I have observed this problem for more than a year, but I was never able to reproduce it until now.

    Using SmartGWT 5.0p Power (2015-02-27) with MSSQL Server 2014.

    #2
    Since all requests use the same RPCManager, I would expect all of them to run in the same transaction, so they should not be able to block each other.
    But what happens with the fetches that SmartGWT creates automatically? Do they run in a separate transaction?



      #3
      After setting canSyncCache="false", I still observe blocked processes.
      The report:
      Code:
      <blocked-process-report monitorLoop="689">
       <blocked-process>
        <process id="process274b2a108" taskpriority="0" logused="276" waitresource="KEY: 5:72057594065518592 (8ecfe6af5078)" waittime="6969" ownerId="37263" transactionname="implicit_transaction" lasttranstarted="2015-03-07T00:19:45.970" XDES="0x27f680570" lockMode="S" schedulerid="8" kpid="2236" status="suspended" spid="53" sbid="0" ecid="0" priority="0" trancount="1" lastbatchstarted="2015-03-07T00:19:45.970" lastbatchcompleted="2015-03-07T00:19:42.487" lastattention="1900-01-01T00:00:00.487" clientapp="Microsoft JDBC Driver for SQL Server" hostname="Arbeit" hostpid="0" loginname="test" isolationlevel="read committed (2)" xactid="37263" currentdb="5" lockTimeout="4294967295" clientoption1="673185824" clientoption2="128058">
         <executionStack>
          <frame line="21" stmtstart="1798" stmtend="2150" sqlhandle="0x0200000004566120a4e9be4110fc7af39049a37948fe22070000000000000000000000000000000000000000"/>
         </executionStack>
         <inputbuf>
      set nocount on;
      				
      				declare @mandantId as integer;
      				declare @userId as integer;
      				set @mandantId = 1;
      				set @userId = 1;
      				
      				delete from t_tmp_map_local_remote_address where f_user_id = @userId and f_mandant = @mandantId;
      				
      				----------------------------------------------------------------------
      				--updates
      				declare @c_f_field_f_id as integer, @c_f_field_f_strasse as varchar(250), @c_f_field_f_zusatz as varchar(250), @c_f_field_f_plz as varchar(250), 
      				@c_f_field_f_ort as varchar(250), @c_f_field_f_ortsteil as varchar(250), @c_f_field_f_land as varchar(2);
      
      				declare c_cursor cursor for 
      					select f_field_f_id, f_field_f_strasse, f_field_f_zusatz, f_field_f_plz, f_field_f_ort, f_field_f_ortsteil, f_field_f_land
      					from t_tmp_address_aenderungen
      					where f_type = &apos;UPDATE&apos; and
      					f_mandant = @mandantId and f_user_id = @userId;
      				open c_cursor
      				FETCH NEXT FROM c_cursor
      				INTO @c_f_field_f_id, @c_f_field_f_strasse, @c_f_field_f_zusatz, @c_f_field_f_plz, @c_f_field_f   </inputbuf>
        </process>
       </blocked-process>
       <blocking-process>
        <process status="sleeping" spid="54" sbid="0" ecid="0" priority="0" trancount="1" lastbatchstarted="2015-03-07T00:19:45.967" lastbatchcompleted="2015-03-07T00:19:45.967" lastattention="1900-01-01T00:00:00.967" clientapp="Microsoft JDBC Driver for SQL Server" hostname="Arbeit" hostpid="0" loginname="test" isolationlevel="read committed (2)" xactid="37261" currentdb="5" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
         <executionStack/>
         <inputbuf>
      INSERT INTO t_tmp_address_aenderungen (f_field_f_datum, f_field_f_new_address, f_field_f_old_address, f_field_f_schueler_id, f_field_f_typ, f_mandant, f_type, f_user_id) VALUES (&apos;2015-03-07T00:19:45&apos;, &apos;abc, &apos;, &apos;&apos;, 11397, &apos;ELTERN2&apos;, 1, &apos;LOG&apos;, 1) select SCOPE_IDENTITY() AS GENERATED_KEYS   </inputbuf>
        </process>
       </blocking-process>
      </blocked-process-report>
      So now the INSERT statement is the one doing the blocking...
      What is happening here?



        #4
        The INSERT never seems to return; it continues to block the SELECT until I restart Tomcat.



          #5
          Just to keep you looped in - this issue is currently being looked at and we'll follow up when we have information for you.

          Regards
          Isomorphic Software



            #6
            Originally posted by edulid
            The INSERT never seems to return; it continues to block the SELECT until I restart Tomcat.
            Could you elaborate on the database/table design, including any hints used? Does the table have any foreign keys, etc.? What does your DataSource .ds.xml file look like?

            Not sure how valid this is, but from what I can tell, cursors are not allowed to read uncommitted rows in a transaction.

            Have you tried the following?

            1. Not using a cursor, to see if the cursor is causing the problem
            2. Changing the cursor to be either STATIC or FAST_FORWARD (see the sketch below)?
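
            For the second option, the change would look roughly like this, based on the cursor you posted (a sketch only; keep your actual column list):
            Code:
            -- same cursor, but declared FAST_FORWARD (forward-only, read-only)
            declare c_cursor cursor fast_forward for
                select f_field_f_id, f_field_f_strasse, f_field_f_zusatz, f_field_f_plz, f_field_f_ort, f_field_f_ortsteil, f_field_f_land
                from t_tmp_address_aenderungen
                where f_type = 'UPDATE' and f_mandant = @mandantId and f_user_id = @userId;

            -- or, to work from a snapshot taken when the cursor is opened:
            -- declare c_cursor cursor static for ...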

            Regards,
            Isomorphic Software



              #7
              I seem to be having a similar issue to edulid, though I'm not using a cursor.

              http://forums.smartclient.com/showthread.php?t=32494



                #8
                Yes, I think this issue may be similar to andrerichards' issue.
                I am experiencing it both *with and without* cursors.

                I previously had a DMI method that only used server-side DSRequests/DSResponses, and I got calls from people in production telling me that the DB had crashed. I wasn't able to solve the problem (or reproduce it); it only went away after I restarted the Tomcat server, and sometimes even the DB.

                I then changed the DMI method to run entirely in T-SQL, and since then I have not experienced this DB blocking. So something is wrong with the SmartGWT DMIs: for complex operations I cannot use DMI DSRequests, but have to use T-SQL directly. This should not be the case.

                I think this isn't directly related to cursors (although I will look at Isomorphic's suggestions).
                It is just very difficult to reproduce. I was able to reproduce it with cursors, which is why I posted this example.
                But I repeat: this happens with AND WITHOUT cursors (using only DMI DSRequests/DSResponses), and it happens with both 5.0p and 4.1p Power. I haven't tested with 3.0.



                  #9
                  @OP
                  I found that setting autoJoinTransactions to true in server.properties fixes the issue for me.

                  sql.SQLServer.autoJoinTransactions: true



                    #10
                    Thanks, I will try this.

                    @Isomorphic, could you please confirm whether the lack of this setting can cause the DB to block?



                      #11
                      And further: is this a bug?
                      Shouldn't requests that are supposed to run in the same transaction (i.e. those sharing the same rpcManager) use the same connection and transaction, even if this setting is turned off?



                        #12
                        The short answer is: it depends. As the framework is shipped, the "autoJoinTransactions" setting is "ANY_CHANGE", which means updates, adds and deletes in a single queue with a shared RPCManager will form a single transaction; the automatic cache-sync queries we run will also join this transaction, but regular fetch operations will not. This is all documented in the autoJoinTransactions documentation.

                        So the change described by andrerichards should not be necessary, unless you have changed or somehow overridden the autoJoinTransactions setting. As you will see in that documentation, there are many ways to do this: globally, specifically for SQL Server, or for specific DataSources, operationBindings or even individual requests.
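
                        For reference, the global and SQL Server-specific forms in server.properties look like this (the values shown are only examples):
                        Code:
                        # global default transaction policy (ANY_CHANGE is the shipped default)
                        autoJoinTransactions: ANY_CHANGE

                        # or, limited to SQL Server connections only
                        sql.SQLServer.autoJoinTransactions: true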

                        We have tried to reproduce the problem you are seeing - specifically, the problem that two DSRequests, run from a single server method and affecting the same row, end up using different database connections and thus causing a deadlock - but we are unable to do so with the default config shipped with SmartGWT.

                        Therefore, unless you know of a change you have made to the autoJoinTransactions setup that would explain this, please share your .ds.xml files, your server.properties (plus project.properties if you have one, and framework.properties if you have modified it), and the complete server classes that demonstrate the problem. If you don't want to post these on the forums, please zip them up and email them to support@isomorphic.com

                        @andrerichards: likewise, unless you have made config changes that would explain why you had to override sql.SQLServer.autoJoinTransactions to true, it would be helpful if you could also provide the details we ask for above.

                        Regards,
                        Isomorphic Software Support



                          #13
                          Hi Isomorphic,

                          I haven't changed the autoJoinTransactions setting anywhere, so I should have the default setting "ANY_CHANGE".

                          As you described, updates, adds and deletes seem to use the same connection, but fetches do not. I don't quite understand the reasoning for not including fetches in the transaction.

                          If I have a batch insert and immediately afterwards want to fetch the inserted rows, I need these two operations to be in the same transaction so that my fetch really sees the inserted rows. Or am I misunderstanding something?

                          Originally posted by Isomorphic

                          We have tried to reproduce the problem you are seeing - specifically, the problem that two DSRequests, run from a single server method and affecting the same row, end up using different database connections and thus causing a deadlock - but we are unable to do so with the default config shipped with SmartGWT.
                          The first request would be an insert, and the second a fetch of the inserted row. If I understand your explanation correctly, under the default SmartGWT settings these would use different connections, since one is an insert and the other a fetch (which doesn't join the transaction). Can you please confirm whether my reasoning is correct?

                          If yes, I think I should change the setting to "ALL" instead of "ANY_CHANGE". What are the pros/cons and potential problems of the "ALL" setting? Why isn't it the default?



                            #14
                            Yes, your reasoning is correct. The reason we don't ship with autoJoinTransactions: ALL out of the box is a desire to avoid introducing a transaction where it is not necessary and may impact performance. Any queue that consists of nothing but fetch operations falls into this category, but the most obvious example of an unnecessary transaction is a queue that contains just a single fetch operation. This is a very common occurrence, so we avoid adding unnecessary potential overhead. Of course, there are cases where a transaction is necessary for the correct operation of a fetch, but these are comparatively rare for most applications.

                            If you make the change to ALL and discover that it has an adverse performance impact on your plain fetch requests, you may wish to consider using a transaction policy of FROM_FIRST_CHANGE. Note that, as discussed in the autoJoinTransactions doc we linked you to earlier, you can set the server.properties property "autoJoinTransactions" to a transaction policy as well as to true/false, so if you want to do this you don't have to programmatically call setTransactionPolicy() on every RPCManager instance; you can still apply the change declaratively.
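
                            For example, the declarative form is a one-line change (a sketch; use whichever policy value fits your queues):
                            Code:
                            # server.properties: a policy name can be used instead of true/false,
                            # avoiding a setTransactionPolicy() call on each RPCManager
                            autoJoinTransactions: FROM_FIRST_CHANGE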



                              #15
                              Thanks for the explanation.

                              I think I will change the transaction policy for DMI methods that have fetches after inserts/updates/deletes.

                              I read the docs again and I don't understand one part:
                              https://www.smartclient.com/smartgwt...olicy%28int%29

                              Code:
                              ANY_CHANGE: Bundle all requests into a transaction, including leading fetches, if there is any change operation in the queue. This is the default.
                              Here you say that in this case even fetches will be included in the transaction.

                              In my example, where I have an insert and then a fetch, according to these docs they should all be in the same transaction.
                              But you said earlier:
                              Code:
                              the "autoJoinTransactions" setting is "ANY_CHANGE", which means updates, adds and deletes in a single queue with a shared RPCManager will form a single transaction; the automatic cache-sync queries we run will also join this transaction, but regular fetch operations will not.
                              Here you say that fetches are not included in this transaction.
                              Are the docs incorrect or misleading here? Or am I misunderstanding something?

