
  • Isomorphic
    replied
As we previously covered: these are assigned, but they do not have much priority, as you are the only affected user, likely the only user who will ever be affected, and they don't block functionality. So while they are assigned, please expect an extended wait.

At the moment we are dealing with regressions in the latest Firefox related to alignment, and regressions in recent Chrome related to detecting keypress events on Mac. For obvious reasons, these kinds of issues take precedence, and there is no way for us to avoid them, as browser vendors introduce bugs regularly.



  • Blama
    replied
    Hi Isomorphic,

thanks a lot. We'll try it soon. As we do not want to roll out new nightlies without thorough manual testing, I'd like to wait for the other issues from #14 as well.
    Have you already looked into them? All of them have been sitting for almost a month now.

    Best regards
    Blama



  • Isomorphic
    replied
The duplication of records into "oldValues" has been eliminated for the BatchUploader commit operation in SGWT 12.0p and newer releases. There's currently an issue blocking our nightly releases, but the next nightly builds of the mentioned branches should have the fix.



  • Blama
    replied
    Hi Isomorphic,

    yes, all with 12.0p.

    Best regards
    Blama



  • Isomorphic
    replied
    Are all of these issues being reported against the release, SGWT/SC 12.0? No branch was mentioned in any of the posts above.



  • Isomorphic
    replied
    All of those are assigned, but none have high enough priority to have risen to the top of the queue so far.



  • Blama
    replied
    Hi Isomorphic,

did you have a look at the four BatchUploader issues linked in #14? They have all been sitting there for about a week, and you have not yet replied whether you could reproduce them.

    Thank you & Best regards
    Blama



  • Blama
    replied
    Hi Isomorphic,

thanks. Yes, this is not important anymore, as a serverside validator can handle it.
    Thanks for having a look at the generated XML. This will definitely raise the limit on the number of rows the BatchUploader can handle.

    @All: I opened a thread on best practices for background processing when the amount of data becomes too much for BatchUpload to handle (or, as I assume, for user-triggered background processing in general) here.

    Best regards
    Blama



  • Isomorphic
    replied
transformInputValue can be used as we described - we're not sure how to elaborate further.

    Not important anymore, as you seem to have realized that the situation you described in #1 can actually be solved with a validator after all, with no enhancements required.

    We will take a look at whether the XML can be reduced. It looks like there are easy wins.



  • Blama
    replied
    Hi Isomorphic,

thanks. I already assumed that the BatchUploader is special in this case and that adding the error won't be possible.
    I did not try DataSource.transformInputValue(), but having had a look at the docs I can't see how it might help. Can you elaborate?

For now, I found the following solution: basically my own hasRelatedRecord validator, with the same optimization you use for the FK lookup (look up every value only once):

    .ds.xml field:
    Code:
        <field foreignKey="T_RESELLER.ID" name="RESELLER_ID" batchUploadOperationId="fetchUploadData" batchUploadCaseSensitive="true" importStrategy="display" uploadFieldName="Reseller" displayField="RESELLER_NAME" type="integer" required="true">
            <title><fmt:message key="reseller" /></title>
            <validators>
                <validator type="serverCustom" dataSource="T_RESELLER" textField="RESELLER_NAME">
                    <serverObject lookupStyle="factory" className="com.lmscompany.lms.server.worker.validator.ValidatorLookupSucceededFactory" />
                    <errorMessage>$errorMessage</errorMessage>
                </validator>
            </validators>
        </field>
    ValidatorLookupSucceededFactory:
    Code:
    public class ValidatorLookupSucceededFactory {

        // Returns a per-request ValidatorLookupSucceeded instance, so that all
        // validator invocations within one HTTP request share the same cache.
        public static ValidatorLookupSucceeded create(HttpServletRequest httpServletRequest) throws Exception {
            Object validatorLookupSucceededObj = httpServletRequest.getAttribute("ValidatorLookupSucceededInstance");
            if (validatorLookupSucceededObj instanceof ValidatorLookupSucceeded) {
                return (ValidatorLookupSucceeded) validatorLookupSucceededObj;
            }
            if (validatorLookupSucceededObj != null) {
                // An unrelated object was stored under our attribute name: log and replace it.
                Logger log = new Logger(ValidatorLookupSucceededFactory.class.getName());
                log.error("Object found in attribute \"ValidatorLookupSucceededInstance\" is not of type ValidatorLookupSucceeded.");
            }
            ValidatorLookupSucceeded validatorLookupSucceeded = new ValidatorLookupSucceeded();
            httpServletRequest.setAttribute("ValidatorLookupSucceededInstance", validatorLookupSucceeded);
            return validatorLookupSucceeded;
        }
    }
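    The factory returns a per-request instance, so all validator invocations during one upload share the same lookup cache and nothing leaks across requests. The validator class itself: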
    Code:
    public class ValidatorLookupSucceeded {
        Logger log = new Logger(ValidatorLookupSucceeded.class.getName());

        // Caches validation results per (fieldName, uploaded value, looked-up display value),
        // so every combination hits the database at most once per request.
        @SuppressWarnings("rawtypes")
        MultiKeyMap map = new MultiKeyMap();

        @SuppressWarnings("unchecked")
        public boolean condition(Object value, Validator validator, String fieldName, Map<Object, Object> record, DataSource ds,
                HttpServletRequest httpServletRequest) throws Exception {

            log.info("Validating for field \"" + fieldName + "\", value \"" + (value == null ? "" : value.toString()) + "\"");

            // Ensure correct configuration of the validator
            if (validator.getProperty("textField") == null || validator.getProperty("textField").toString().isEmpty()) {
                throw new Exception("Validator must define a \"textField\" attribute.");
            }
            String textField = validator.getProperty("textField").toString();

            // Exit early - nothing to validate
            if (value == null)
                return true;
            String strValue = value.toString();

            /*
             * The automatic lookup works as follows:
             * The user sends text data for the ID(!) field. The lookup for importStrategy="display" checks the parent DS for the text data.
             * Row found: the row ID goes into the ID field, the user's text data into the display field.
             * Row not found: the user's text data stays in the ID field, the display field is empty.
             * The latter case is checked below.
             */
            if (!record.containsKey(textField) || record.get(textField) == null || record.get(textField).toString().isEmpty()) {
                String errorMessage = String.format(I18n.getString("automaticLookupNotSuccessful", httpServletRequest), value);
                validator.addErrorMessageVariable("errorMessage", errorMessage);
                return false;
            }

            // Here we are sure to have both the CSV text and a looked-up ID, and can either check against the DB or reuse earlier results.
            String strCSVString = record.get(textField).toString();

            // Check for prior DB queries with these keys
            if (map.containsKey(fieldName, strValue, strCSVString)) {
                Object o = map.get(fieldName, strValue, strCSVString);
                if (o != null) {
                    validator.addErrorMessageVariable("errorMessage", o.toString());
                    return false;
                } else
                    // Prior validation runs in this request did not find an error for this combination
                    return true;
            }

            // Validate the found ID against the database
            if (validator.getProperty("dataSource") != null && !validator.getProperty("dataSource").toString().isEmpty()) {
                String dataSource = validator.getProperty("dataSource").toString();

                String errorMessage = ValidatorHasRelatedRecord.doValidation(value, dataSource, httpServletRequest);
                if (errorMessage != null) {
                    validator.addErrorMessageVariable("errorMessage", errorMessage);
                    map.put(fieldName, strValue, strCSVString, errorMessage);
                    return false;
                }
            }

            // No validation error found; store this information
            map.put(fieldName, strValue, strCSVString, null);
            return true;
        }
    }
This is working for me, and because the validation results are cached it is also more efficient than the default hasRelatedRecord validator (the same caching might be a nice optimization there as well).

    Is there anything that invalidates this solution from your point of view?
    Because this way, once the open points regarding BatchUploader (Commits, client-side error messages, lookup-speed improvement when no entry is found, Queue success/failure; highest priority first) are solved, this should be fine for uploading ~5000 records with few columns.
    Serverside speed should be OK for this; the problem is more the browser's behavior with AJAX requests this big (as the client sends VERY verbose XML data):

(For two records that are <50 bytes in CSV, the ADD request contains this data:
    Code:
    <transaction xmlns:xsi="http://www.w3.org/2000/10/XMLSchema-instance" xsi:type="xsd:Object"><transactionNum xsi:type="xsd:long">31</transactionNum><operations xsi:type="xsd:List"><elem xsi:type="xsd:Object"><values xsi:type="xsd:Object"><COUNTRY_ID xsi:type="xsd:long">64</COUNTRY_ID><RESELLER_ID xsi:type="xsd:long">18</RESELLER_ID><COUNTRY_NAME>FR</COUNTRY_NAME><ZIPCODE_FROM>02130</ZIPCODE_FROM><_selection_42 xsi:type="xsd:boolean">false</_selection_42><_embeddedComponents_isc_BatchUploader_0_grid xsi:nil="true"/><RESELLER_NAME>ABC-Softwarehaus GmbH (B)</RESELLER_NAME></values><operationConfig xsi:type="xsd:Object"><dataSource>V_RESELLER_SALESAREA_UPLOAD__6__2019_07_03_11_34_27</dataSource><repo xsi:nil="true"/><operationType>add</operationType><textMatchStyle>exact</textMatchStyle></operationConfig><appID>builtinApplication</appID><operation>V_RESELLER_SALESAREA_UPLOAD__6__2019_07_03_11_34_27_add</operation><oldValues xsi:type="xsd:Object"><COUNTRY_ID xsi:type="xsd:long">64</COUNTRY_ID><RESELLER_ID xsi:type="xsd:long">18</RESELLER_ID><COUNTRY_NAME>FR</COUNTRY_NAME><ZIPCODE_FROM>02130</ZIPCODE_FROM><_selection_42 xsi:type="xsd:boolean">false</_selection_42><_embeddedComponents_isc_BatchUploader_0_grid xsi:nil="true"/><RESELLER_NAME>ABC-Softwarehaus GmbH (B)</RESELLER_NAME></oldValues></elem><elem xsi:type="xsd:Object"><values xsi:type="xsd:Object"><COUNTRY_ID xsi:type="xsd:long">64</COUNTRY_ID><RESELLER_ID xsi:type="xsd:long">19</RESELLER_ID><COUNTRY_NAME>FR</COUNTRY_NAME><ZIPCODE_FROM>02130</ZIPCODE_FROM><_selection_42 xsi:type="xsd:boolean">true</_selection_42><_embeddedComponents_isc_BatchUploader_0_grid xsi:nil="true"/><RESELLER_NAME>Hansestadt Softwarekontor (HH)</RESELLER_NAME></values><operationConfig xsi:type="xsd:Object"><dataSource>V_RESELLER_SALESAREA_UPLOAD__6__2019_07_03_11_34_27</dataSource><repo xsi:nil="true"/><operationType>add</operationType><textMatchStyle>exact</textMatchStyle></operationConfig><appID>builtinApplication</appID><operation>V_RESELLER_SALESAREA_UPLOAD__6__2019_07_03_11_34_27_add</operation><oldValues xsi:type="xsd:Object"><COUNTRY_ID xsi:type="xsd:long">64</COUNTRY_ID><RESELLER_ID xsi:type="xsd:long">19</RESELLER_ID><COUNTRY_NAME>FR</COUNTRY_NAME><ZIPCODE_FROM>02130</ZIPCODE_FROM><_selection_42 xsi:type="xsd:boolean">true</_selection_42><_embeddedComponents_isc_BatchUploader_0_grid xsi:nil="true"/><RESELLER_NAME>Hansestadt Softwarekontor (HH)</RESELLER_NAME></oldValues></elem></operations></transaction>
Improvement suggestion: is oldValues needed here? IMHO it can be removed, saving ~50% of the request size and XML generation time.
    )

In general, it turns out for me that BatchUploader is good for small CSVs (<1000 rows). If you agree, could you add this to the docs, so that users who already know they will need to upload more rows don't start down this road?

So an upcoming task here is definitely going to be moving this processing to the background: upload the CSV as a file, pre-process it into a staging table serverside (most likely using serverside DataImport), show it as a normal databound ListGrid, let the user correct the data in the staging table, and then batch-transfer it to the target table.
    As this is a background operation, we'll need some success or progress display, so it is going to be a bit more involved.

    Best regards
    Blama



  • Isomorphic
    replied
    Have you noticed the docs about dataSource.transformInputValue()? That would allow you to leave NULL, or some marker value, for rows where no mapping exists. At that point a hasRelatedRecord validator will flag the row as an error, so you get normal validation errors (you should not try to work with the specialized error format of BatchUploader, which is an internal detail).
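    In outline, the idea is roughly this (a rough sketch, not tested; the override signature is from memory, so check the server-side DataSource docs for the exact one, and lookupFailed() is a placeholder for your own mapping check):
    Code:
    public class ResellerUploadDS extends SQLDataSource {
        // Sketch only: replace values that could not be mapped with null (or a
        // marker value), so that a hasRelatedRecord validator produces a normal
        // validation error for the row. Signature assumed, not verified.
        @Override
        public Object transformInputValue(DSRequest dsRequest, DSField dsField, Object value) {
            if ("RESELLER_ID".equals(dsField.getProperty("name")) && lookupFailed(value)) {
                return null; // or a marker value such as -1
            }
            return value;
        }
    }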



  • Blama
    replied
    Hi Isomorphic,

I tried doing so in my BatchUpload-DMI after validateUploadData(), as described in #11.
    I got this far:
    Code:
// Add validation errors if the lookup (which already happened) did not succeed
            Map<String, String> lookupFields = new LinkedHashMap<String, String>();
    
            // Collect importStrategy="display" fields
            for (String fieldName : dsRequest.getDataSource().getFieldNames()) {
                DSField dsField = dsRequest.getDataSource().getField(fieldName);
    
                if (!"display".equals(dsField.getProperty("importStrategy")))
                    continue;
                String name = dsField.getProperty("name");
                String displayField = dsField.getProperty("displayField");
                lookupFields.put(name, displayField);
            }
The next step would be to loop over the data and add errors, but these differ from normal validation errors.
    Output of the RPC tab in the Developer Console for a BatchUpload:
    Code:
    {
        affectedRows:0,
        data:{
            discardedColumns:[
            ],
            gridRows:[
                {
                    COUNTRY_ID:64,
                    COUNTRY_NAME:"FR",
                    ZIPCODE_FROM:":00000",
                    ZIPCODE_TO:"206 + 454"
                },
                {
                    COUNTRY_ID:64,
                    RESELLER_ID:12,
                    RESELLER_NAME:"317",
                    COUNTRY_NAME:"FR",
                    ZIPCODE_FROM:"02110"
                },
                {
                    COUNTRY_ID:64,
                    RESELLER_ID:318,
                    COUNTRY_NAME:"FR",
                    ZIPCODE_TO:"02130"
                },
                {
                    COUNTRY_ID:64,
                    RESELLER_ID:12,
                    COUNTRY_NAME:"FR",
                    ZIPCODE_FROM:"02130"
                }
            ],
            errors:[
                {
                    errors:{
                        RESELLER_ID:[
                            "Field is required"
                        ]
                    },
                    rowNum:0
                },
                {
                    errors:{
                        RESELLER_ID:[
                            "\"318\" not found in existing database records"
                        ],
                        ZIPCODE_TO:[
                            "Please set either \"State\" or \"ZIP-Code from\" (and optionally \"ZIP-Code to\")."
                        ]
                    },
                    rowNum:2
                }
            ]
        },
        invalidateCache:false,
        isDSResponse:true,
        operationType:"add",
        queueStatus:0,
        status:0
    }
As you can see, there are nested errors with a rowNum attribute for the inner error objects.
    How can I add to this, meaning add errors for rows that don't have any errors yet, and also amend the inner errors list for rows that do?
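    For illustration, the structure I would need to create or amend per row looks roughly like this as Java collections (a plain sketch mirroring the output above, not an actual framework API):
    Code:
    // Plain sketch, not a framework API: one entry of the nested "errors" list
    // shown above, built as Java collections.
    Map<String, Object> rowError = new HashMap<String, Object>();
    Map<String, List<String>> fieldErrors = new HashMap<String, List<String>>();
    fieldErrors.put("RESELLER_ID",
            Arrays.asList("\"318\" not found in existing database records"));
    rowError.put("errors", fieldErrors);
    rowError.put("rowNum", 2); // index into gridRows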

    Thank you & Best regards
    Blama



  • Blama
    replied
    Hi Isomorphic,

I still see this as an error given the description of display, but let's agree to disagree.
    Hopefully this will be a minor issue for us. Also, I might be able to add validation errors myself like this(?):

The general lookup mechanism is as follows, right?
    Lookup:
    • If found, CSV-String to displayField, looked-up ID to ID.
    • If not found, CSV-String to ID, displayField null
    I might be able to catch this situation in my BatchUpload-DMI after
    Code:
    DSResponse validatedData = batchUpload.validateUploadData(response);
    and add validation errors myself, correct?

    Or perhaps even a server-side validator can do this?
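    Something like this, perhaps (an untested sketch using the standard serverCustom condition signature; the display field name "RESELLER_NAME" is just an example):
    Code:
    // Untested sketch: after the automatic lookup, a failed mapping leaves the
    // display field empty while the CSV text stays in the ID field (see the
    // bullets above). A serverCustom validator can detect exactly that case.
    public boolean condition(Object value, Validator validator, String fieldName,
            Map<Object, Object> record, DataSource ds, HttpServletRequest httpServletRequest) {
        if (value == null)
            return true; // nothing uploaded, nothing to check
        Object display = record.get("RESELLER_NAME"); // display field name, example only
        return display != null && !display.toString().isEmpty();
    }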

    Best regards
    Blama



  • Isomorphic
    replied
    When we said:

    You seem to be trying to use BatchUploader to go directly to production-ready data with no intervening review step.
We meant no intervening human review step. The way we ourselves have typically used BatchUploader, the uploaded data goes somewhere temporary, and there's a final review before it goes to the actual production tables - not by the end user who is uploading, but by some kind of admin. In that case, we need display mapping to work just how it does, because again, we don't want to burden the uploader with errors they don't understand how to resolve.

    So please stop claiming this is a bug when it's working the way we actually need it to!

We do understand that in your case of ambiguous id/display mapping (the issue from post #1), you'll have to do something inconvenient like mapping to fake IDs first, so you know which values were and were not mapped. That's the kind of interesting edge case where Feature Sponsorship is very useful to you, because you get to modify the framework to solve something that, frankly, probably won't ever come up for anyone else, yet we still charge as though you're adding a feature for general use.

    In sum: we fully understand how this behavior is inconvenient for you, but it is not at all a bug, and we're willing to add a feature that makes your use case easier and more efficient as a very small Feature Sponsorship.



  • Blama
    replied
    Hi Isomorphic,

right now, I'm talking about a client-side BatchUploader import with "Upload" and "Commit" buttons, which is used for bulk uploads of data (typical amount: 100-2000 rows).
    We verify the data with a bunch of validators, and non-matching rows are flagged by them. The user can then correct errors with the "Prev"/"Next" error buttons and finally persist via "Commit". This is a straightforward use case IMHO.

    Any row flagged as an error (a small number) can be manually reviewed, but you can't expect the end user to compare every row and every cell in detail.
    If you flagged these rows as potentially wrong, that would also be fine.

    With the permissive (as you call it) way it is now, the result of a "Commit" depends on the existing lookup data, with no warning or indication whatsoever. I'm sure you agree that this is a bad user experience.

    As you also write in #8:
    Originally posted by Isomorphic
    So again, you have a way to be strict if you want
I assume you mean using a hasRelatedRecord validator. This is true ONLY for #3, not for #1, so I'm not sure you have understood my problem here.
    In #1, for the CSV row
    Code:
    1 "2"
the final row after "Commit" will link to itemId 2, which actually stands for SKU "3": the unmatched display value "2" was silently kept and reinterpreted as a key.


    Also the docs for display say:
    Code:
    The import process expects values in the import dataset to be display values, and it will transform them to the corresponding underlying keys
A silent "if no key is found, I'll keep the display value, even though it has a different meaning when used as a key itself" is really not something anyone would expect.

Aside from this, one could also use serverside DataImport, which has the same problem and no way of intervening.

    Best regards
    Blama

