
    Best practice: SaaS with SmartGWT

    Hi Isomorphic,

    can you share your best practice / advice on this one?
We are building a SaaS product on SmartGWT. Right now this means an (easy, because automated) build of ~30 test and customer WAR files of ~50 MB each (takes 15 min, only one run of gwtc needed) and an upload of ~1.5 GB.
    This works, but is a tedious and annoying task that does not scale.


    So the idea is to move all the customers to 1 WAR. This means the following tasks:
    • All customer specific code must come from the DB
    • All customer specific "files" (e.g. HTML mail templates) must come from the DB
    • All customer specific field names must come from the DB
    • Application must use DB access that can access all data (before I used separate schemas), WHERE clause set in SQLDataSource
    • Login must use DB access that can access all users (before I used separate schemas)
    • Mail server must be dynamic per customer (see here, either by extension of the framework or by sending the mail oneself)
• Internal code changes for code that assumed it would only get back data for one tenant
    This is all possible IMHO. I also do see these three points:
    1. Make sure Dynamic DataSources (same name, different result per customer), which might get cached, work as expected
    2. Make sure Dynamic field title (this feature sponsorship) works as expected (same DataSource name, different result per customer)
    3. Make sure the new feature you built in 12.0 caches only for one click of "upload" (strongly assume so)

Can you give your thoughts on 1/2/3? Do you see a problem here?
    I'm sure that these two things would solve all potential issues:
    1. Disable pooling like described here in server.properties OR
2. Make ALL DataSources dynamic and add a suffix of "_xxx" to them, where xxx is the customer number. Then only use this suffixed access and rewrite all ID, foreignKey and includeFrom (more?) accordingly in the generator.
Which way do you suggest? Am I missing something here? Do you have numbers on the performance penalty when disabling pooling?
    Do you know of a customer / project doing a similar thing with the framework?

    Thank you & Best regards
    Blama

    #2
This is a bit of an odd post. You've asked your first question in another thread already; it will be answered there. The other two sound like vague unease as to whether features will work as documented. They will, or we'll fix them.

    Suffixing DataSources is a reasonable multi-home strategy we’ve used before. The alternative is of course having a customerId column. The latter makes it easier to do cross-customer reporting among other things. The two strategies can be mixed.

    Suffixing DataSources does not imply that DataSource pooling needs to be turned off, and we would not recommend doing so.
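The suffixing strategy discussed above amounts to a mechanical rewrite of each generated .ds.xml: the DataSource ID and every cross-DataSource reference (foreignKey, includeFrom) get the customer suffix appended. A minimal sketch of that rewrite as a plain string transformation (the attribute names match the standard .ds.xml format, but the class and method names here are made up for illustration and this is not Isomorphic's implementation):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DsSuffixer {

    /** Appends "_<customer>" to the DataSource ID and to the DataSource
     *  part of foreignKey/includeFrom references like "otherDS.fieldName". */
    public static String suffix(String dsXml, String customer) {
        // ID="myDS" -> ID="myDS_cust001"
        String out = dsXml.replaceAll(
            "ID=\"([A-Za-z0-9_]+)\"", "ID=\"$1_" + customer + "\"");
        // foreignKey="otherDS.ID" -> foreignKey="otherDS_cust001.ID"
        // includeFrom="otherDS.NAME" -> includeFrom="otherDS_cust001.NAME"
        Pattern ref = Pattern.compile(
            "(foreignKey|includeFrom)=\"([A-Za-z0-9_]+)\\.([A-Za-z0-9_]+)\"");
        Matcher m = ref.matcher(out);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(sb, m.group(1) + "=\"" + m.group(2) + "_"
                + customer + "." + m.group(3) + "\"");
        }
        m.appendTail(sb);
        return sb.toString();
    }
}
```

For example, `suffix("<DataSource ID=\"lead\">…</DataSource>", "cust001")` would yield an ID of `lead_cust001`. A real generator would parse the XML rather than use regexes, and would hand the rewritten definition to the framework's Dynamic DataSource mechanism.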



      #3
      Hi Isomorphic,

      Originally posted by Isomorphic View Post
      Suffixing DataSources is a reasonable multi-home strategy we've used before. The alternative is of course having a customerId column. The latter makes it easier to do cross-customer reporting among other things. The two strategies can be mixed.
Thanks. It's always good to know one is not the first with a specific setup.
      I do also have a customerId column.

      Originally posted by Isomorphic View Post
      Suffixing DataSources does not imply that DataSource pooling needs to be turned off, and we would not recommend doing so.
Sorry, misunderstanding here.
I meant that in order to solve the problem I could do either:
1. Make all DS Dynamic with a suffix, OR
2. Leave all DS as they are and turn off pooling
Does 1) need more changes than rewriting ID, foreignKey and includeFrom?
Would 2) solve my issue (some DataSources that are already Dynamic would indeed be different per customer)? Is there a performance impact to expect?

      Thank you & Best regards
      Blama



        #4
        We’re not sure how 2) would actually work to create per-customer data, unless you introduce customerId columns. And if you did that, there would again be no reason to turn off pooling.



          #5
          Hi Isomorphic,

          I already do have customerId columns. The problem I see is the following:
          Depending on a setting that is per customer, I change the tableName of a DataSource.
Now if I have a dynamic DataSource with ID "myDS_cust001" I can set the tableName as needed and everything is OK, even if pooled.
If I only have a dynamic DataSource with ID "myDS" and I set the tableName as is correct for that case (I can get the customer and the correct setting via the httpRequest's domain name), whichever customer requests the DS first will manifest its configuration in the pool -> problem.
          But if I disable the pool and run the XML modification of RegisterDS every time, it will always be correct.

I'm pretty sure the correct/suggested solution is to make all DS dynamic and suffix them, but I'd say the other way is faster to develop and perfectly fine IMHO, if it does not have performance implications.
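For illustration, the suffixed variant described above might look like this (a sketch only; the ID, tableName and field are made up). Because the customer suffix is part of the DataSource ID, the pooled instance can never leak another customer's tableName:

```xml
<!-- Per-customer generated DataSource: the suffix in the ID makes the
     pooled instance unambiguous, so the per-customer tableName is safe. -->
<DataSource ID="myDS_cust001" serverType="sql" tableName="leads_variant_b">
    <fields>
        <field name="ID" type="sequence" primaryKey="true"/>
    </fields>
</DataSource>
```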

          Best regards
          Blama



            #6
Not following this. If you have different tableNames per customer you can go ahead and dynamically generate DataSources with customer-specific suffixing as previously discussed. This still does not imply turning off pooling. It also still does not imply that *every* DataSource must be suffixed, only those where you choose to have different tables per customer.



              #7
              Hi Isomorphic,

Agreed. This only affects DataSources where something can differ depending on the customer.
For these I can either suffix them and also modify all fields linking to them. But this "linking to them" is transitive -> most likely all DS need to be dynamic, then.
Or I turn off pooling for generated DataSources. Clear so far.

Another question w.r.t. this: is the customized fmt-tag DataSource localization result cached per DataSource, meaning that I would have to suffix all DataSources if I'm using that feature with different results per customer once 12.1 becomes a "p" version?

              Best regards
              Blama



                #8
                As it happens, it's currently implemented as an entirely dynamic lookup. However, how could it matter? Are you saying different customers have different translations for the same string, even within the same locale? That seems nonsensical.



                  #9
                  Hi Isomorphic,

                  yes, I fully agree, but unfortunately that's the way it is.
                  We are distributing leads. Depending on the industry the person taking a lead is:
                  • a "Reseller" (IT industry) (German: "Reseller", "Partner")
                  • a "Dealer" (HVAC, Sanitary, Automobile industry) (German: "Händler")
                  • a "Broker" (Insurance) (German: "Makler")
                  Depending on the customer a campaign can be named "campaign" or "source". A lead can be named "lead" or "inquiry". And so on and so forth.

In order to make the transition as easy as possible for our customers and their partners (whatever they are called), we try to stay as close to the customer's wording as possible. Also, the people using our software are often not too fond of IT, so we have to do our best to be as flexible as possible and give as little reason as possible to object to the change of process.

                  Best regards
                  Blama



                    #10
                    Those are not translations into a different language, they are industry-specific terminology. It would be a hack to use a system intended for localization to implement industry-specific terminology instead. It would break if, for example, we took the completely reasonable measure of caching translations per-locale.



                      #11
                      Hi Isomorphic,

                      Originally posted by Isomorphic View Post
                      Those are not translations into a different language, they are industry-specific terminology. It would be a hack to use a system intended for localization to implement industry-specific terminology instead.
OK, but what would be a non-hackish solution that is worth the necessary effort, in your opinion, if the application is otherwise the same?

                      Originally posted by Isomorphic View Post
                      It would break if, for example, we took the completely reasonable measure of caching translations per-locale.
That is exactly why I asked. If you cached, I'd need to use suffixed DataSources, and this would work, because the terminology is constant per customer.

                      Best regards
                      Blama



                        #12
OK, but what would be a non-hackish solution that is worth the necessary effort, in your opinion, if the application is otherwise the same?
                        No idea what the question is here.

                        To restate: if you use the i18n system to do something that's non-i18n, expect future breakage.

                        If there is customer-specific terminology - or anything customer specific that you are placing in a .ds.xml file - then you would need to use Dynamic DataSources and suffixing. Just for completeness: this yet again does not imply any need to turn off pooling.

                        And for completeness for any other readers: the use case of customer-specific terminology does not require using Dynamic DataSources. The different terms could simply be stored as field values instead of appearing directly in the .ds.xml files. Blama just happens to have chosen to implement this via Dynamic DataSources.
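The "terms as field values" alternative mentioned above can be sketched as a plain per-customer lookup with a default fallback: the terminology lives in a table (simulated here by maps) and is resolved at render time, so no DataSource definitions differ per customer. All names below are illustrative, not framework API:

```java
import java.util.HashMap;
import java.util.Map;

public class Terminology {
    // Default term per key, e.g. "partner" -> "Reseller"
    private final Map<String, String> defaults = new HashMap<>();
    // Per-customer overrides, e.g. "hvacCustomer" -> {"partner" -> "Dealer"}
    private final Map<String, Map<String, String>> perCustomer = new HashMap<>();

    public void setDefault(String key, String term) {
        defaults.put(key, term);
    }

    public void set(String customer, String key, String term) {
        perCustomer.computeIfAbsent(customer, c -> new HashMap<>())
                   .put(key, term);
    }

    /** Customer-specific term if present, otherwise the default. */
    public String term(String customer, String key) {
        Map<String, String> m = perCustomer.get(customer);
        if (m != null && m.containsKey(key)) return m.get(key);
        return defaults.get(key);
    }
}
```

In a real application the two maps would be backed by DB tables and the lookup would happen wherever titles are rendered, keeping the .ds.xml files identical across customers.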



                          #13
                          Hi Isomorphic,

                          all agreed.
                          To have different .ds.xml and pooling, one needs dynamic DS with suffix. This is the suggested best practice and my long term goal.

                          IMHO it should also work without dynamic DS with suffix, if one disables pooling. But this will have a performance impact. Both correct?


                          Originally posted by Isomorphic View Post
                          It would break if, for example, we took the completely reasonable measure of caching translations per-locale.
Here I had a second thought. IMHO caching is not an option for you, because the feature built in 12.1 should allow for field name changes at runtime. So the developer can cache the names from the DB (e.g. in a singleton or in the session) in order to save lookup time, but you must always run (pooled) DataSources through a LocaleMessageProvider, if one is defined. Is this assumption correct?


Thinking about it, another question came to my mind: in a multi-home environment, where all DS are dynamic and dynamic DS are pooled, can/should one configure the pool size? Because the number of DataSources can potentially be HUGE.
                          I see there is a section "Generic pooling config" in isomorphic_core_rpc.jar/framework.properties.
                          (The link in the file is dead and should be replaced by this one).

                          Best regards
                          Blama



                            #14
                            IMHO it should also work without dynamic DS with suffix, if one disables pooling. But this will have a performance impact. Both correct?
                            Yes, using Dynamic DataSources without suffixing but with pooling off would also be expected to work, but would have pretty bad performance. You would be forcing the system to parse and interpret DataSource files for every DataSource interaction.

                            Here I had a second thought. IMHO caching is not an option for you, because the feature built in 12.1 should allow for fieldname changes at runtime.
LocaleMessageProvider does not allow for fieldName changes at runtime. It gives you a way to provide i18n strings embedded in DataSources. Probably you just misspoke here.

                            So the developer can cache the names from the DB (e.g. in a singleton or in the session) in order to save lookup time, but you must run (pooled) DataSources always through a LocaleMessageProvider, if defined. Is this assumption correct?
                            The docs don't currently define a lifecycle here, and it seems perfectly valid to assume that the same string in the same locale has the same translation, and cache based on that. This would break any plan of using the i18n system to do per-customer terminology changes, instead of its designed role as a localization system.
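The breakage described above can be made concrete with a sketch of a cache keyed on (locale, messageKey) only, which is the "completely reasonable" caching a framework might adopt. Whichever customer triggers the first lookup freezes its term for everyone in that locale. This is an illustration of the failure mode, not framework code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiFunction;

public class LocaleCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    // Underlying message source: (locale, key) -> translated string
    private final BiFunction<String, String, String> provider;

    public LocaleCache(BiFunction<String, String, String> provider) {
        this.provider = provider;
    }

    /** Returns the cached translation; populates the cache on first use.
     *  Note the cache key carries no customer identity. */
    public String message(String locale, String key) {
        return cache.computeIfAbsent(locale + "|" + key,
            k -> provider.apply(locale, key));
    }
}
```

If the provider returns customer-dependent strings, the second customer silently gets the first customer's term; suffixed DataSources avoid this because the message key then differs per customer.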

                            It's unlikely that you would need to configure the pool size unless you had many thousands of customers on a server with unusually constrained memory. But feel free to play with it and find the best settings for you.



                              #15
                              Hi Isomorphic,

                              Originally posted by Isomorphic View Post
                              Yes, using Dynamic DataSources without suffixing but with pooling off would also be expected to work, but would have pretty bad performance. You would be forcing the system to parse and interpret DataSource files for every DataSource interaction.
OK, thanks. Then it will be the suffixed and pooled version for me before I start having different DS per customer.



                              Originally posted by Isomorphic View Post
                              LocalMessageProvider does not allow for fieldName changes at runtime. It gives you a way to provide i18n strings embedded in DataSources. Probably you just mispoke here.
                              Yes. I meant field title.
                              Code:
E.g.: the resulting field (with or without DSGenerator) is
                              <field name="STATUS_RESELLER_ID" type="integer" displayField="STATUS_RESELLER_NAME">
                                          <title><fmt:message key="reseller"/></title>
                                          <validators>
                                              <validator type="isUnique" serverOnly="true">
                                                  <errorMessage><fmt:message key="validatorNameInUse" /></errorMessage>
                                              </validator>
                                          </validators>
                              </field>
This will then always be run through the LocaleMessageProvider (if defined) when the complete DataSource is needed, e.g. for a DataSourceLoader call or when generating validation errors, correct?




                              Originally posted by Isomorphic View Post
                              The docs don't currently define a lifecycle here, and it seems perfectly valid to assume that the same string in the same locale has the same translation, and cache based on that. This would break any plan of using the i18n system to do per-customer terminology changes, instead of its designed role as a localization system.
                              Yes - for this to work per customer we'd need suffixed DataSources - having this is the plan before switching to 12.1 once it is a "p" version.
The only remaining question then is what to do in case of runtime field name changes, which I assume are not that frequent. This is only a problem if you do cache fmt results, which you say you don't, correct?
Actually, in the thread leading to the feature sponsorship this was kind of a requirement, even if not put into writing:
                              Originally posted by Blama View Post
Perhaps you are right. I do not expect frequent changes after the initial setup. But if these happen, it's hard to tell the customer that they have to wait for a web application restart to see their changes. Also I think it is much easier for me to just intervene in the fmt subclass.




                              Originally posted by Isomorphic View Post
                              It's unlikely that you would need to configure the pool size unless you had many thousands of customers on a server with unusually constrained memory. But feel free to play with it and find the best settings for you.
Thanks - just asking to be sure. Not having to change anything is always best.


                              Best regards
                              Blama

