    Performance impact of high number of data source fields

    Hello,

    I have a question on design choice for a DataSource which expects a big number of elements in the server response.

    We have a custom legacy backend that returns and expects XML data, for which we have successfully implemented a DataSource. We use the backend to read and write data from a SmartGWT based app. Typically, requests and responses contain 20-30 elements. Now we have a situation where the backend sometimes returns (and expects back) quite a large number of dynamic fields, say 150 or so. Most often these fields are not present in the data, but sometimes they are.

    As a design choice we have two options:

    1) We can simply add these fields to the relevant DataSources. Even though they are quite dynamic, we know the names of the elements. For example:

    DataSourceTextField field1 = new DataSourceField("field1");
    DataSourceTextField field1 = new DataSourceField("field2");
    DataSourceTextField field1 = new DataSourceField("field3");
    ... 150 or so of these and
    setFields(..., field1, field2, field3, ...);

    And just roll with it.
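
    If we went this route, we would probably build the field list in a loop rather than writing out 150 declarations by hand. Just a sketch; the "extraField" + i names are placeholders for our real element names:

    import java.util.ArrayList;
    import java.util.List;

    import com.smartgwt.client.data.DataSource;
    import com.smartgwt.client.data.DataSourceField;
    import com.smartgwt.client.data.fields.DataSourceTextField;

    // Build the field list programmatically instead of ~150 hand-written declarations.
    // "extraField" + i is only an illustration; the real names would come from our
    // known list of element names.
    List<DataSourceField> fields = new ArrayList<DataSourceField>();
    fields.add(new DataSourceTextField("name"));   // placeholder for the usual 20-30 fields
    for (int i = 1; i <= 150; i++) {
        fields.add(new DataSourceTextField("extraField" + i));
    }
    DataSource ds = new DataSource();
    ds.setFields(fields.toArray(new DataSourceField[fields.size()]));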

    2) We can implement a wrapping "key-value pair" typed child DataSource and use a single "multiple"-typed field in the parent DataSource to map these extra fields into a "Map" of some kind, roughly along the lines of the sketch below. This way we wouldn't have a huge number of fields in the parent DataSource (see 1 above).
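
    What we had in mind looks roughly like this sketch (the "keyValuePair" / "extraFields" names are made up, and I'm assuming setTypeAsDataSource() is the right way to point the field at the child DataSource):

    import com.smartgwt.client.data.DataSource;
    import com.smartgwt.client.data.DataSourceField;
    import com.smartgwt.client.data.fields.DataSourceTextField;

    // Child DataSource describing a generic key/value pair.
    DataSource keyValueDS = new DataSource();
    keyValueDS.setID("keyValuePair");
    keyValueDS.setFields(new DataSourceTextField("key"), new DataSourceTextField("value"));

    // Single field on the parent that holds any number of such pairs as nested records.
    DataSourceField extraFields = new DataSourceField();
    extraFields.setName("extraFields");
    extraFields.setMultiple(true);
    extraFields.setTypeAsDataSource(keyValueDS);

    DataSource parentDS = new DataSource();
    parentDS.setFields(/* the ~20-30 normal fields, */ extraFields);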

    My question is basically this: for simplicity we would prefer option 1), but we are concerned about the performance overhead it might have, especially when the fields are not present at all in the Records or in the XML data returned from the server. Based on the internals of SmartGWT, does a large number of fields (say 200 or so) in a DataSource cause overhead in server communication or data binding logic? We understand that when the attributes really are present in the data they cause the performance cost that is normally expected, but when, for example, only 10 of the 200 fields are actually present in the data and Records, does the declaration of the 190 additional fields in the DataSource add overhead to the processing?

    Thanks,
    Marko



    #2
    When the field values are not actually present, overhead will be minimal. You should definitely go with Option 1.



      #3
      Excellent, thanks for the quick response.
