    #16
    Originally posted by Isomorphic
    Never mind, we tried it out.

    This is due to drawAllMaxCells as we indicated in post #5 of this thread.
    (...and you have also ruled it out in the same post, so instead of looking into this, I started inspecting my code, as you have suggested.)

    setDrawAllMaxCells(0) avoids the fetches.
    Thank you, great!
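
    For reference, a minimal sketch of applying this workaround (assuming SmartGWT; the DataSource id below is a placeholder):

    Code:
    import com.smartgwt.client.data.DataSource;
    import com.smartgwt.client.widgets.grid.ListGrid;

    public class DrawAllMaxCellsExample {
        public static ListGrid createGrid() {
            ListGrid grid = new ListGrid();
            grid.setDataSource(DataSource.get("myDataSource")); // placeholder DataSource id
            grid.setAutoFetchData(true);
            // drawAllMaxCells = 0 disables drawing "all cells" for small result sets,
            // which is what was triggering the extra fetches after the filter request
            grid.setDrawAllMaxCells(0);
            return grid;
        }
    }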

    It's unclear why this results in inefficiently sending 3 requests
    My real problem here is not inefficiency, but the fact that my code cannot understand (and react to) what is happening.
    The first fetch can be recognized as the result of a filter change, but the subsequent ones cannot be, as far as I know.

    But now this is solved.

    something to do with hiding fields immediately before calling fetchData,
    The same thing happens if I manually execute searches later on.
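
    (For reference, the pattern in question is roughly the following; this is only a sketch, assuming SmartGWT, and the field name and criteria are placeholders, not the actual test-case values.)

    Code:
    import com.smartgwt.client.data.Criteria;
    import com.smartgwt.client.widgets.grid.ListGrid;

    public class HideThenFetch {
        static void applyFilter(ListGrid grid) {
            grid.hideField("someField");                // hide a column first...
            grid.fetchData(new Criteria("name", "23")); // ...then fetch with the new criteria
        }
    }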

    we'll look into that.
    OK. In these cases (when you say you will look into something behind the scenes), what is the established process? Will you notify me when it is resolved, or should I check the nightlies periodically?

    Thank you for your help!
    Last edited by csillag; 13 Mar 2010, 02:14.



      #17
      Originally posted by Isomorphic
      setDrawAllMaxCells(0) avoids the fetches.
      Well, it certainly helps, but there are still situations where a filtering request might generate more than one fetch request.
      (Although this might be a separate issue.)

      Please see the attached (modified) test case. (The DS XML and dataset I have sent previously still apply.)

      Short explanation:

      I load a DS, bind it to the ListGrid, and fetch the data. Then (a rough code equivalent of these two steps is sketched after the list):
      - Press "execute test step 1" to scroll down into the data, to record #3000. (Or you can do it manually.)
      - Press "execute test step 2" to set a criteria. (Or you can do it manually, by entering "23" in the filter box.)

      Result:

      Two fetch requests are generated:
      - One for "startRow":953, "endRow":1000. (Response: totalRows:176, startRow:176, endRow:176)
      - One for "startRow":0, "endRow":75. (Response: totalRows:175, startRow:0, endRow:75)

      If I had to guess, I would say this is because I was well into the data before doing the search, so the grid figures that after filtering it should still be somewhere in the middle of the data rather than at the start, and therefore asks for data from ... where? At 1/3 of the original record count? Is this a heuristic for the expected number of records after filtering?

      Anyway, this is still no good; how can I make it issue only one fetch per filter action?

      Thank you!

      UPDATE: since this only happens when the user is not at the start of the dataset when filtering, I can avoid it when filtering from code simply by going to the start first (see the sketch below). However, I cannot do this when the user performs the search manually. One relatively easy solution would be for you to add a hook/callback function/event handler for when a filtering action is about to start (just like when editing is about to start, etc.). Then I could prepare for it, for example by going to the start. Or, if I also had another hook/callback function/event handler at the end of the filtering action, that would make the whole auto-detection scheme unnecessary.
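
      (The workaround for the filter-from-code case, as a sketch assuming SmartGWT:)

      Code:
      import com.smartgwt.client.data.Criteria;
      import com.smartgwt.client.widgets.grid.ListGrid;

      public class FilterFromCode {
          static void applyFilter(ListGrid grid, Criteria criteria) {
              grid.scrollToRow(0);      // jump back to the start of the dataset first...
              grid.fetchData(criteria); // ...so the filter triggers only the single fetch from row 0
          }
      }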

      UPDATE: no need to go to 3000; the effect already appears if I scroll down to row #40 before the search.
      REQ1: [9,84]  RESP1: [9,84], total is 175
      REQ2: [0,42]  RESP2: [0,42]

      So the second request happens even if the first request _could_ have gotten all the 75 rows it wanted; I guess it figures out from the total number of rows that it should fetch records from the start instead.

      What I do not understand is why the two requests ask for intersecting regions. (Both contain [9,42].)

      And, the most important bit: how do I avoid, or at least detect, this effect? (I was hoping that the second request only happens if the first response does not contain enough information (because it tried to read past the end of the records), but this does not seem to be the case.)
      Attached Files
      Last edited by csillag; 13 Mar 2010, 09:15.



        #18
        Taking a step back, it looks like you are tying correct behavior of your application to getting exactly one fetch per filter, and this isn't a good idea. From the perspective of general use of the framework, eliminating these extra fetches is a minor optimization for an edge case. An issue like this would not be considered a release blocker.



          #19
          I am not saying that this is a release blocker for the framework; I am only saying that it makes what I want to do very hard (if not impossible).

          Specifically, I need to be able to tell apart at least three kinds of dataArrived events:
          - ones generated because the user is scrolling
          - ones generated because the user changes the filter
          - ones generated because the data itself has changed.

          Since the API does not expose this information, all I can do is use various roundabout methods to detect what happened, such as comparing the current filter to a previously saved value (sketched below). But I cannot do this when there are multiple dataArrived events caused by the same filter change, because in that case only the first one is recognized as a filter change.
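
          (A sketch of that roundabout detection, assuming SmartGWT. Reading the current filter via getFilterEditorCriteria() and comparing criteria via their value maps are assumptions that fit my setup, not the only possible approach. As described above, it breaks when one filter change produces several dataArrived events, because only the first of them sees a criteria that differs from the saved one.)

          Code:
          import java.util.Map;

          import com.smartgwt.client.data.Criteria;
          import com.smartgwt.client.widgets.grid.ListGrid;
          import com.smartgwt.client.widgets.grid.events.DataArrivedEvent;
          import com.smartgwt.client.widgets.grid.events.DataArrivedHandler;

          public class FetchClassifier implements DataArrivedHandler {
              private final ListGrid grid;
              private Map<?, ?> lastCriteriaValues;

              public FetchClassifier(ListGrid grid) {
                  this.grid = grid;
                  grid.addDataArrivedHandler(this);
              }

              @Override
              public void onDataArrived(DataArrivedEvent event) {
                  // Compare the current filter criteria to the last saved value
                  Criteria current = grid.getFilterEditorCriteria();
                  Map<?, ?> values = current == null ? null : current.getValues();
                  boolean filterChanged = values == null
                          ? lastCriteriaValues != null
                          : !values.equals(lastCriteriaValues);
                  lastCriteriaValues = values;
                  if (filterChanged) {
                      // react to a (user-initiated) filter change
                  } else {
                      // assume scrolling (or a data change) caused this fetch
                  }
              }
          }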

          Anyway, for now, I am trying to work around this another way.
