
    Question about converting JSON to Record with a custom DataSource

    SmartClient Version: v10.0p_2015-06-02/LGPL Development Only (built 2015-06-02)

    I have my own custom data source (DSProtocol.CLIENTCUSTOM). When handling the FETCH operation, I have a server call that returns JSON data. In the routine, I want to call processResponse, and it seems like I need a DSResponse. It also seems like the correct thing to do is to fill in the data in the response by calling setData with a list of ListGridRecord objects. My problem is how to convert the JSON data to an array of ListGridRecords. I've written my own code to do this conversion using a combination of JSON.decode and XMLTools.selectObjects, but when measuring the performance of this code, it is significantly slower than when I don't use CLIENTCUSTOM (i.e., when using the built-in code to do the conversion). Is there a way I can call a method to leverage the built-in code to do this conversion for me instead of needing to write my own?

    The first question worth considering is whether you really need to use client-custom protocol at all.
    If you haven't already, take a good look over the "Client Data Integration" documentation topic. This describes how to build a dataSource which will issue requests and consume XML or JSON responses from a server, including details on how to configure essentially every aspect of the request and the expected response format.

    If you make use of this, the SmartGWT library handles performing the JSON -> ListGridRecord object conversion for you, and the performance should match that of the standard REST DataSource, etc.
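    As a rough illustration of that approach (this is GWT client code, and the endpoint, XPath, and field names below are hypothetical): a DataSource declared along these lines issues the HTTP requests and converts the JSON response into records itself, with no hand-written parsing.

```java
// Hypothetical configuration sketch: a DataSource that fetches JSON
// from a REST endpoint and lets the framework do the record conversion.
DataSource ds = new DataSource();
ds.setDataFormat(DSDataFormat.JSON);   // responses arrive as JSON
ds.setDataURL("/rest/items");          // hypothetical endpoint
ds.setRecordXPath("/data");            // where the record array lives in the response
ds.setFields(new DataSourceTextField("name"),
             new DataSourceIntegerField("quantity"));
```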

    There may be some reason why this is definitely not an option for you (if so, we would be interested in what that is - there may be some configuration options for the DataSource request/response handling flow you've overlooked).
    Regardless - if client-custom is a must - before we theorize about the cause of performance woes, perhaps you could show us a sample JSON response (with sensitive data stripped out, of course) and your parsing code and we'll take a look.

    Isomorphic Software


      If you really want the details of why, you can try wading through the lengthy and confusing discussion from earlier on the forum. The short answer is that we receive server events about data changes that we want to use to update the data source (add/update/remove). Since the events reflect data changes that already occurred on the server, we do not want the data source updates to do things like try to POST an update back to the server again. In fact, our server REST interface only supports a GET; adds, updates, and removes are all done using separate targeted APIs, separate from the REST GET.

      If there is another way to update the data source with new data (and have it not try to do POST/PUT/DELETE calls) other than using CLIENTCUSTOM we can discuss that.

      Assuming CLIENTCUSTOM is appropriate, I was able to resolve my issue. I can use the standard handling for FETCH, it's the other operations that are the problem. Since the built-in fetch is faster than my code when it comes to doing the JSON to ListGridRecord conversion and since it is fine with me to not have to maintain my own custom conversion code, I am more than happy to find a way to use the standard FETCH implementation. I found I was able to get what I wanted by calling setOperationBindings and specifying DSProtocol.GETPARAMS for the FETCH and DSProtocol.CLIENTCUSTOM for ADD, UPDATE, and REMOVE. This seems to work fine for me, so unless you have a better suggestion, my problem is solved and I'll just stick with this implementation.
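      For reference, a sketch of the per-operation bindings just described (GWT client code mirroring the calls named above, not a tested snippet):

```java
// Sketch of the bindings described above: standard getParams handling
// for fetch, clientCustom for add/update/remove.
OperationBinding fetch = new OperationBinding();
fetch.setOperationType(DSOperationType.FETCH);
fetch.setDataProtocol(DSProtocol.GETPARAMS);

OperationBinding add = new OperationBinding();
add.setOperationType(DSOperationType.ADD);
add.setDataProtocol(DSProtocol.CLIENTCUSTOM);

OperationBinding update = new OperationBinding();
update.setOperationType(DSOperationType.UPDATE);
update.setDataProtocol(DSProtocol.CLIENTCUSTOM);

OperationBinding remove = new OperationBinding();
remove.setOperationType(DSOperationType.REMOVE);
remove.setDataProtocol(DSProtocol.CLIENTCUSTOM);

ds.setOperationBindings(fetch, add, update, remove);
```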


        Well, we should reiterate that we suspect there are some areas of your application where there's still unnecessary complexity (beyond what we worked with you to uncover), but we can't really tell without a full Architecture Review.

        As far as this narrow problem, it makes a lot of sense to use DataSources to implement the operations of a clientCustom DataSource, in fact, we document this approach and call it the DataSource Facade pattern.
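        To make the shape of the Facade pattern concrete, here is a plain-Java sketch (illustrative names, not the SmartGWT API): the facade reuses the delegate's fetch behavior but turns update operations into local no-ops, which matches the case where the changes have already happened on the server.

```java
import java.util.*;

// Plain-Java sketch of the "DataSource Facade" idea (names are
// illustrative, not SmartGWT API): a facade delegates fetches to a
// backing source, but makes add/update/remove no-ops because those
// changes already occurred on the server.
interface RecordSource {
    List<Map<String, Object>> fetch(String criteria);
    void update(Map<String, Object> record);
}

class RestRecordSource implements RecordSource {
    private final List<Map<String, Object>> serverData;
    RestRecordSource(List<Map<String, Object>> serverData) { this.serverData = serverData; }
    public List<Map<String, Object>> fetch(String criteria) {
        // Stand-in for the HTTP GET; here we just copy the "server" data.
        return new ArrayList<>(serverData);
    }
    public void update(Map<String, Object> record) {
        throw new UnsupportedOperationException("server only supports GET");
    }
}

class FacadeRecordSource implements RecordSource {
    private final RecordSource delegate;
    FacadeRecordSource(RecordSource delegate) { this.delegate = delegate; }
    public List<Map<String, Object>> fetch(String criteria) {
        return delegate.fetch(criteria);   // standard fetch behavior, reused
    }
    public void update(Map<String, Object> record) {
        // No-op: the change already happened server-side, so no POST/PUT
        // is issued; only local caches would be kept in sync.
    }
}
```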


          So it sounds like (from looking at the DataSource Facade pattern) that, in answer to my original question, I could have set up everything as CLIENTCUSTOM (including the FETCH) and then used the standard FETCH behavior by doing the following in transformRequest:

          public Object transformRequest(final DSRequest dsRequest) {
              final DataSource superDS = DataSource.get(getInheritsFrom());
              return dsRequest.getData();
          }
          Is that correct?

          Also, I noticed a similar situation to mine documented under updateCaches, where it talks about needing to update things based upon events coming in. Can you discuss the differences between processing my update events using updateCaches vs. doing an updateData call on my data source, using a CLIENTCUSTOM binding, and calling processResponse in transformRequest when I get an update request? Are there reasons to do it one way vs. the other?

          Finally, is there any way with any of these methods to do a bulk update? I am getting events that cause me to need to update potentially hundreds of records in a very short amount of time, and when the grid has thousands of records in it and I do hundreds of individual updates at a time, the browser locks up for quite a while before the display finally updates and I can click on things again. It appears that there is a lot of overhead with each update, and that it might be a lot faster if I could update more than one record at a time. With my current code, I am doing an updateData on the ListGrid, but the overloads only allow a single record to be updated at a time.

          I've tried this with IE11, FF, and Chrome - IE11 seems to be noticeably slower than the other two.
          Last edited by pgrever; 16 Sep 2015, 19:40.


            The code sample you copied shows how you can create a clientCustom DataSource that behaves exactly like another DataSource, which gives you a starting point to begin adding code to modify behavior in some way (eg, introduce delays for testing purposes). That's just one instance of the DataSource Facade pattern; other examples listed in the doc involve a clientCustom DataSource delegating to other DataSources that have totally different fields.

            We can't really advise on what's appropriate for you - too little context.

            As far as a comparison between updateCaches() and clientCustom - that doesn't make much sense (sorry). To restate the docs: updateCaches() is typically used when you have data updates that have occurred *through some action the user didn't initiate* (eg, another user). clientCustom is typically used when you need to implement server contact in a way that isn't built into DataSources, for example, integration with legacy GWT-RPC services, or to implement the DataSource facade pattern, for any of the reasons stated in the docs.

            It's true that you could use the clientCustom protocol to effectively re-implement updateCaches, by having your clientCustom DataSource just no-op when sent particular requests, responding immediately with data that is already present on the client. There does not seem to be a point to this - it's just extra work.

            As far as updating multiple records at once - if you can tell that, for example, 1000 records are about to be updated, wholesale dataset replacement for a grid (eg grid.setData() with a new ResultSet) or just telling the grid to re-fetch from the server probably makes more sense than broadcasting each update individually via updateCaches().


              So, given that the main case I have is events generated by some action the user did NOT initiate (and even when the user did "initiate" it, it is done indirectly and still comes to the UI via an event, the same as if someone else did it), it sounds like using updateCaches would be a better direction than using CLIENTCUSTOM.

              As far as the update of 1000 records, it doesn't seem to make much sense to me to use setData. I might have 10,000 out of 20,000 records fetched so far and need to update 1,000 of them. I don't have the whole data set, so I don't see how I can use setData. I can look at "re-fetching" again, but I thought that the last time I tried that, the user sees the grid clear, a Loading... message show up, and they lose their scroll position and selection; any one of these is a problem. If updateCaches has similar behavior, then I need to stick with using CLIENTCUSTOM.


                I don't have the whole data set so I don't see how I can use setData.
                Neither does the grid, of course. You would take the data the grid does have, apply updates, and give it back via setData().

                Or, if you are maintaining a separate cache, update that and have the grid re-fetch.

                Whether you setData() or trigger re-fetch, it's true that scroll position and selection will not be maintained. That's by design; there's no way for the grid to know that the scroll position and selection are actually still valid when what you've just done is entirely replaced the data set. But it is easy enough to capture the scroll position and selection, then re-apply it after new data has been supplied.
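                Sketched in plain Java (illustrative structures, not the SmartGWT API), that capture-and-restore flow looks like this: merge the updates into the records the grid already has by primary key, hand the result back via setData, then re-apply the saved selection and scroll position.

```java
import java.util.*;

// Illustrative sketch of the suggestion above: merge updates into the
// partial cache by primary key, replace the grid data, and restore the
// selection and scroll position that the replacement dropped.
class GridSketch {
    List<Map<String, Object>> data = new ArrayList<>();
    Set<Object> selectedIds = new HashSet<>();
    int scrollRow = 0;

    void setData(List<Map<String, Object>> newData) {
        data = newData;
        selectedIds.clear();   // like the real grid, replacing data drops state
        scrollRow = 0;
    }

    // Merge updated records into the cached records by "id",
    // preserving the order of the existing cache.
    static List<Map<String, Object>> applyUpdates(List<Map<String, Object>> cache,
                                                  List<Map<String, Object>> updates) {
        Map<Object, Map<String, Object>> byId = new LinkedHashMap<>();
        for (Map<String, Object> rec : cache) byId.put(rec.get("id"), rec);
        for (Map<String, Object> upd : updates) byId.put(upd.get("id"), upd);
        return new ArrayList<>(byId.values());
    }
}
```

                Usage: save selectedIds and scrollRow, call setData(applyUpdates(...)), then assign the saved values back.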


                  If the grid is paged and it knows that it has 1000 records out of a possible 2000, and I get the data the grid does have, apply the updates, and then set it back via setData(), does it still know there are 2000 records, show the vertical scrollbar appropriately, and fetch more data if the user scrolls to a section that has not been fetched yet? I was under the impression that calling setData would lose all of this. Am I wrong?

                  It might be easy enough to restore scroll position and selection, but it is not a good solution. I am getting around 10 updates per second continually, with bursts that are much higher. Trying to keep the grid up to date by re-loading, re-scrolling, and re-selecting is not an acceptable solution. Not only does the browser lock up while all this is going on, but the clear and re-draw of the grid is very disorienting to the user. Even if we told the user we only update every 10 seconds and the data may be 10 seconds behind, doing this type of refresh every 10 seconds would get very annoying. If we lengthen the interval, then the data on the screen is out of date for longer periods of time.

                  For our customers, who are trying to process 10,000 jobs a day through their manufacturing process, this seems too slow. They are already complaining about these kinds of delays, which we are already seeing because we cannot process the updates fast enough applying them one at a time. I need a way to update 10 to 100 records very rapidly, with the correct filtering applied based upon the changes, and records repositioned according to the sort criteria. I know it is possible to keep up with this in a browser, because another team here is doing a UI on the same data and events using Angular grids and it is doing just fine. Maybe it's just not possible using the SmartGWT ListGrid and DataSource classes.


                    See ResultSet.initialData for how to create a ResultSet that has a partial cache, and will perform fetches from then on.

                    Your concerns about refreshing delays with this approach are unfounded. Your issue with calling updateCaches() thousands of times in a row is most likely related to work being repeated over and over, which this different approach would avoid; done correctly, the process of refreshing the data in this way would not be perceptible to the user.

                    However, if you are trying to maintain sort direction and filtering criteria as well, those would be dropped by supplying a new ResultSet.

                    So probably the best thing to do is directly update the Records in the ResultSet, then call ResultSet.filterLocalData() which specifically mentions this usage, then call listGrid.markForRedraw().

                    This approach is fine if you are not using inline editing features, or certain other features which require the grid to be aware of each data change (eg auto-fitting). This is also the way that such problems are solved with more primitive grids, such as those found in Angular (which doesn't have features which would potentially conflict with a low-level approach like this).
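                    As a plain-Java sketch of that batched flow (illustrative names, not the SmartGWT API): apply all pending updates to the cached records in place, then run one filter pass, one sort pass, and one redraw, rather than paying that cost once per record.

```java
import java.util.*;
import java.util.function.Predicate;

// Illustrative sketch of the batched approach: mutate many cached
// records in place, then re-apply filter and sort criteria once and
// redraw once, instead of paying the per-update overhead hundreds
// of times.
class BatchedCache {
    List<Map<String, Object>> records;
    int redraws = 0;
    BatchedCache(List<Map<String, Object>> records) { this.records = records; }

    List<Map<String, Object>> applyBatch(Map<Object, Map<String, Object>> updatesById,
                                         Predicate<Map<String, Object>> filter,
                                         Comparator<Map<String, Object>> sort) {
        for (Map<String, Object> rec : records) {            // 1) in-place updates
            Map<String, Object> upd = updatesById.get(rec.get("id"));
            if (upd != null) rec.putAll(upd);
        }
        List<Map<String, Object>> visible = new ArrayList<>();
        for (Map<String, Object> rec : records)              // 2) one filter pass
            if (filter.test(rec)) visible.add(rec);
        visible.sort(sort);                                  // 3) one sort pass
        redraws++;                                           // 4) one redraw
        return visible;
    }
}
```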


                      We'll give this a try for our next release and see what the user experience is like.  We're trying to release a new version of our product next week and this would be too much of a change to experiment with at this time.  It is correct that we are not using in-line editing in any of our grids with real-time updates.


                        By the way, all of these posts keep showing up with a set of garbage characters after periods.  I don't know if you are seeing that as well.  I'm using IE11.


                          OK, two points to consider for future planning then:

                          1. updateCaches() updates any widget that uses the same DataSource. If you go the route of direct updates of grid data instead of calls to updateCaches(), you may have to consider also updating any forms or other widgets which are also bound to the same DataSource.

                          2. we've been just taking it as a given that updateCaches() is slow with enough updates. We don't have a test case for this; there could be some easily corrected issue here that doesn't involve a framework change.


                            Oh, and, we are aware of the forums issue and working on it. We needed to update the forums software for security and mobile usability reasons, and unfortunately they have included an atrociously buggy rich text editor that we have not yet figured out how to turn off system-wide.


                              Ok, thanks for the updates.  BTW, we currently always have a one-to-one relationship between a data source and a grid (we don't use data sources for anything else and we don't share them between two grids).